Test Report: KVM_Linux 17217

8716ac0c8da6d39536faafa0827bebe41e78f6a6:2023-09-14:31013

Failed tests (2/317)

Order  Failed test                             Duration (s)
214    TestMultiNode/serial/RestartKeepsNodes  112.17
215    TestMultiNode/serial/DeleteNode         3.02
TestMultiNode/serial/RestartKeepsNodes (112.17s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-040952
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-040952
E0914 19:05:19.531409   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/ingress-addon-legacy-306177/client.crt: no such file or directory
multinode_test.go:290: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-040952: (28.451242182s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-040952 --wait=true -v=8 --alsologtostderr
E0914 19:06:02.658871   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/addons-417207/client.crt: no such file or directory
E0914 19:06:10.628985   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/functional-281336/client.crt: no such file or directory
multinode_test.go:295: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-040952 --wait=true -v=8 --alsologtostderr: exit status 90 (1m21.375560389s)

-- stdout --
	* [multinode-040952] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17217
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17217-7285/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17217-7285/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting control plane node multinode-040952 in cluster multinode-040952
	* Restarting existing kvm2 VM for "multinode-040952" ...
	* Preparing Kubernetes v1.28.1 on Docker 24.0.6 ...
	* Configuring CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Starting worker node multinode-040952-m02 in cluster multinode-040952
	* Restarting existing kvm2 VM for "multinode-040952-m02" ...
	* Found network options:
	  - NO_PROXY=192.168.39.14
	
	

-- /stdout --
** stderr ** 
	I0914 19:05:20.962804   29302 out.go:296] Setting OutFile to fd 1 ...
	I0914 19:05:20.963060   29302 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 19:05:20.963070   29302 out.go:309] Setting ErrFile to fd 2...
	I0914 19:05:20.963075   29302 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 19:05:20.963243   29302 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17217-7285/.minikube/bin
	I0914 19:05:20.963781   29302 out.go:303] Setting JSON to false
	I0914 19:05:20.964724   29302 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":2870,"bootTime":1694715451,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 19:05:20.964780   29302 start.go:138] virtualization: kvm guest
	I0914 19:05:20.967109   29302 out.go:177] * [multinode-040952] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0914 19:05:20.968562   29302 out.go:177]   - MINIKUBE_LOCATION=17217
	I0914 19:05:20.969984   29302 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 19:05:20.968648   29302 notify.go:220] Checking for updates...
	I0914 19:05:20.972859   29302 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17217-7285/kubeconfig
	I0914 19:05:20.974265   29302 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17217-7285/.minikube
	I0914 19:05:20.975509   29302 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 19:05:20.976805   29302 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 19:05:20.978678   29302 config.go:182] Loaded profile config "multinode-040952": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 19:05:20.978756   29302 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 19:05:20.979122   29302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0914 19:05:20.979158   29302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 19:05:20.994127   29302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36753
	I0914 19:05:20.994544   29302 main.go:141] libmachine: () Calling .GetVersion
	I0914 19:05:20.994996   29302 main.go:141] libmachine: Using API Version  1
	I0914 19:05:20.995035   29302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 19:05:20.995534   29302 main.go:141] libmachine: () Calling .GetMachineName
	I0914 19:05:20.995713   29302 main.go:141] libmachine: (multinode-040952) Calling .DriverName
	I0914 19:05:21.030837   29302 out.go:177] * Using the kvm2 driver based on existing profile
	I0914 19:05:21.032222   29302 start.go:298] selected driver: kvm2
	I0914 19:05:21.032235   29302 start.go:902] validating driver "kvm2" against &{Name:multinode-040952 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.28.1 ClusterName:multinode-040952 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.14 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.16 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.107 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel
:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath
: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 19:05:21.032388   29302 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 19:05:21.032684   29302 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 19:05:21.032744   29302 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17217-7285/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 19:05:21.046926   29302 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0914 19:05:21.047549   29302 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 19:05:21.047615   29302 cni.go:84] Creating CNI manager for ""
	I0914 19:05:21.047628   29302 cni.go:136] 3 nodes found, recommending kindnet
	I0914 19:05:21.047635   29302 start_flags.go:321] config:
	{Name:multinode-040952 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-040952 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.14 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.16 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.107 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
AutoPauseInterval:1m0s}
	I0914 19:05:21.047846   29302 iso.go:125] acquiring lock: {Name:mk542b08865b5897b02c4d217212972b66d5575d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 19:05:21.049820   29302 out.go:177] * Starting control plane node multinode-040952 in cluster multinode-040952
	I0914 19:05:21.051078   29302 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0914 19:05:21.051117   29302 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17217-7285/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-amd64.tar.lz4
	I0914 19:05:21.051132   29302 cache.go:57] Caching tarball of preloaded images
	I0914 19:05:21.051200   29302 preload.go:174] Found /home/jenkins/minikube-integration/17217-7285/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0914 19:05:21.051211   29302 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0914 19:05:21.051357   29302 profile.go:148] Saving config to /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/config.json ...
	I0914 19:05:21.051546   29302 start.go:365] acquiring machines lock for multinode-040952: {Name:mk07a05e24a79016fc0a298412b40eb87df032d8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 19:05:21.051585   29302 start.go:369] acquired machines lock for "multinode-040952" in 22.658µs
	I0914 19:05:21.051598   29302 start.go:96] Skipping create...Using existing machine configuration
	I0914 19:05:21.051604   29302 fix.go:54] fixHost starting: 
	I0914 19:05:21.051851   29302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0914 19:05:21.051877   29302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 19:05:21.065211   29302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41551
	I0914 19:05:21.065673   29302 main.go:141] libmachine: () Calling .GetVersion
	I0914 19:05:21.066137   29302 main.go:141] libmachine: Using API Version  1
	I0914 19:05:21.066161   29302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 19:05:21.066462   29302 main.go:141] libmachine: () Calling .GetMachineName
	I0914 19:05:21.066623   29302 main.go:141] libmachine: (multinode-040952) Calling .DriverName
	I0914 19:05:21.066770   29302 main.go:141] libmachine: (multinode-040952) Calling .GetState
	I0914 19:05:21.068116   29302 fix.go:102] recreateIfNeeded on multinode-040952: state=Stopped err=<nil>
	I0914 19:05:21.068149   29302 main.go:141] libmachine: (multinode-040952) Calling .DriverName
	W0914 19:05:21.068327   29302 fix.go:128] unexpected machine state, will restart: <nil>
	I0914 19:05:21.070143   29302 out.go:177] * Restarting existing kvm2 VM for "multinode-040952" ...
	I0914 19:05:21.071437   29302 main.go:141] libmachine: (multinode-040952) Calling .Start
	I0914 19:05:21.071593   29302 main.go:141] libmachine: (multinode-040952) Ensuring networks are active...
	I0914 19:05:21.072249   29302 main.go:141] libmachine: (multinode-040952) Ensuring network default is active
	I0914 19:05:21.072599   29302 main.go:141] libmachine: (multinode-040952) Ensuring network mk-multinode-040952 is active
	I0914 19:05:21.072924   29302 main.go:141] libmachine: (multinode-040952) Getting domain xml...
	I0914 19:05:21.073627   29302 main.go:141] libmachine: (multinode-040952) Creating domain...
	I0914 19:05:22.290792   29302 main.go:141] libmachine: (multinode-040952) Waiting to get IP...
	I0914 19:05:22.291697   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:22.292055   29302 main.go:141] libmachine: (multinode-040952) DBG | unable to find current IP address of domain multinode-040952 in network mk-multinode-040952
	I0914 19:05:22.292102   29302 main.go:141] libmachine: (multinode-040952) DBG | I0914 19:05:22.292035   29331 retry.go:31] will retry after 308.296154ms: waiting for machine to come up
	I0914 19:05:22.601636   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:22.602066   29302 main.go:141] libmachine: (multinode-040952) DBG | unable to find current IP address of domain multinode-040952 in network mk-multinode-040952
	I0914 19:05:22.602099   29302 main.go:141] libmachine: (multinode-040952) DBG | I0914 19:05:22.602024   29331 retry.go:31] will retry after 317.837388ms: waiting for machine to come up
	I0914 19:05:22.921508   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:22.921867   29302 main.go:141] libmachine: (multinode-040952) DBG | unable to find current IP address of domain multinode-040952 in network mk-multinode-040952
	I0914 19:05:22.921901   29302 main.go:141] libmachine: (multinode-040952) DBG | I0914 19:05:22.921847   29331 retry.go:31] will retry after 471.086167ms: waiting for machine to come up
	I0914 19:05:23.394404   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:23.394838   29302 main.go:141] libmachine: (multinode-040952) DBG | unable to find current IP address of domain multinode-040952 in network mk-multinode-040952
	I0914 19:05:23.394871   29302 main.go:141] libmachine: (multinode-040952) DBG | I0914 19:05:23.394792   29331 retry.go:31] will retry after 484.306086ms: waiting for machine to come up
	I0914 19:05:23.880204   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:23.880564   29302 main.go:141] libmachine: (multinode-040952) DBG | unable to find current IP address of domain multinode-040952 in network mk-multinode-040952
	I0914 19:05:23.880583   29302 main.go:141] libmachine: (multinode-040952) DBG | I0914 19:05:23.880535   29331 retry.go:31] will retry after 618.601122ms: waiting for machine to come up
	I0914 19:05:24.500881   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:24.501312   29302 main.go:141] libmachine: (multinode-040952) DBG | unable to find current IP address of domain multinode-040952 in network mk-multinode-040952
	I0914 19:05:24.501338   29302 main.go:141] libmachine: (multinode-040952) DBG | I0914 19:05:24.501260   29331 retry.go:31] will retry after 909.340951ms: waiting for machine to come up
	I0914 19:05:25.412225   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:25.412602   29302 main.go:141] libmachine: (multinode-040952) DBG | unable to find current IP address of domain multinode-040952 in network mk-multinode-040952
	I0914 19:05:25.412643   29302 main.go:141] libmachine: (multinode-040952) DBG | I0914 19:05:25.412551   29331 retry.go:31] will retry after 1.126879825s: waiting for machine to come up
	I0914 19:05:26.540657   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:26.541060   29302 main.go:141] libmachine: (multinode-040952) DBG | unable to find current IP address of domain multinode-040952 in network mk-multinode-040952
	I0914 19:05:26.541092   29302 main.go:141] libmachine: (multinode-040952) DBG | I0914 19:05:26.541009   29331 retry.go:31] will retry after 1.102019824s: waiting for machine to come up
	I0914 19:05:27.644123   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:27.644509   29302 main.go:141] libmachine: (multinode-040952) DBG | unable to find current IP address of domain multinode-040952 in network mk-multinode-040952
	I0914 19:05:27.644533   29302 main.go:141] libmachine: (multinode-040952) DBG | I0914 19:05:27.644464   29331 retry.go:31] will retry after 1.486754446s: waiting for machine to come up
	I0914 19:05:29.133039   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:29.133510   29302 main.go:141] libmachine: (multinode-040952) DBG | unable to find current IP address of domain multinode-040952 in network mk-multinode-040952
	I0914 19:05:29.133535   29302 main.go:141] libmachine: (multinode-040952) DBG | I0914 19:05:29.133470   29331 retry.go:31] will retry after 2.117464983s: waiting for machine to come up
	I0914 19:05:31.252796   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:31.253157   29302 main.go:141] libmachine: (multinode-040952) DBG | unable to find current IP address of domain multinode-040952 in network mk-multinode-040952
	I0914 19:05:31.253189   29302 main.go:141] libmachine: (multinode-040952) DBG | I0914 19:05:31.253114   29331 retry.go:31] will retry after 2.386416431s: waiting for machine to come up
	I0914 19:05:33.642490   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:33.643052   29302 main.go:141] libmachine: (multinode-040952) DBG | unable to find current IP address of domain multinode-040952 in network mk-multinode-040952
	I0914 19:05:33.643079   29302 main.go:141] libmachine: (multinode-040952) DBG | I0914 19:05:33.643013   29331 retry.go:31] will retry after 2.611013914s: waiting for machine to come up
	I0914 19:05:36.255832   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:36.256237   29302 main.go:141] libmachine: (multinode-040952) DBG | unable to find current IP address of domain multinode-040952 in network mk-multinode-040952
	I0914 19:05:36.256259   29302 main.go:141] libmachine: (multinode-040952) DBG | I0914 19:05:36.256195   29331 retry.go:31] will retry after 4.317080822s: waiting for machine to come up
	I0914 19:05:40.578744   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:40.579178   29302 main.go:141] libmachine: (multinode-040952) Found IP for machine: 192.168.39.14
	I0914 19:05:40.579199   29302 main.go:141] libmachine: (multinode-040952) Reserving static IP address...
	I0914 19:05:40.579208   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has current primary IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:40.579755   29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "multinode-040952", mac: "52:54:00:0b:8d:f2", ip: "192.168.39.14"} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
	I0914 19:05:40.579790   29302 main.go:141] libmachine: (multinode-040952) DBG | skip adding static IP to network mk-multinode-040952 - found existing host DHCP lease matching {name: "multinode-040952", mac: "52:54:00:0b:8d:f2", ip: "192.168.39.14"}
	I0914 19:05:40.579808   29302 main.go:141] libmachine: (multinode-040952) Reserved static IP address: 192.168.39.14
	I0914 19:05:40.579828   29302 main.go:141] libmachine: (multinode-040952) Waiting for SSH to be available...
	I0914 19:05:40.579844   29302 main.go:141] libmachine: (multinode-040952) DBG | Getting to WaitForSSH function...
	I0914 19:05:40.581922   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:40.582219   29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
	I0914 19:05:40.582248   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:40.582419   29302 main.go:141] libmachine: (multinode-040952) DBG | Using SSH client type: external
	I0914 19:05:40.582441   29302 main.go:141] libmachine: (multinode-040952) DBG | Using SSH private key: /home/jenkins/minikube-integration/17217-7285/.minikube/machines/multinode-040952/id_rsa (-rw-------)
	I0914 19:05:40.582466   29302 main.go:141] libmachine: (multinode-040952) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.14 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17217-7285/.minikube/machines/multinode-040952/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 19:05:40.582480   29302 main.go:141] libmachine: (multinode-040952) DBG | About to run SSH command:
	I0914 19:05:40.582491   29302 main.go:141] libmachine: (multinode-040952) DBG | exit 0
	I0914 19:05:40.677125   29302 main.go:141] libmachine: (multinode-040952) DBG | SSH cmd err, output: <nil>: 
	I0914 19:05:40.677493   29302 main.go:141] libmachine: (multinode-040952) Calling .GetConfigRaw
	I0914 19:05:40.678081   29302 main.go:141] libmachine: (multinode-040952) Calling .GetIP
	I0914 19:05:40.680506   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:40.680910   29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
	I0914 19:05:40.680945   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:40.681103   29302 profile.go:148] Saving config to /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/config.json ...
	I0914 19:05:40.681284   29302 machine.go:88] provisioning docker machine ...
	I0914 19:05:40.681323   29302 main.go:141] libmachine: (multinode-040952) Calling .DriverName
	I0914 19:05:40.681566   29302 main.go:141] libmachine: (multinode-040952) Calling .GetMachineName
	I0914 19:05:40.681734   29302 buildroot.go:166] provisioning hostname "multinode-040952"
	I0914 19:05:40.681755   29302 main.go:141] libmachine: (multinode-040952) Calling .GetMachineName
	I0914 19:05:40.681906   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHHostname
	I0914 19:05:40.683964   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:40.684284   29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
	I0914 19:05:40.684307   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:40.684417   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHPort
	I0914 19:05:40.684595   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
	I0914 19:05:40.684736   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
	I0914 19:05:40.684890   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHUsername
	I0914 19:05:40.685062   29302 main.go:141] libmachine: Using SSH client type: native
	I0914 19:05:40.685397   29302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I0914 19:05:40.685412   29302 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-040952 && echo "multinode-040952" | sudo tee /etc/hostname
	I0914 19:05:40.823251   29302 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-040952
	
	I0914 19:05:40.823283   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHHostname
	I0914 19:05:40.825791   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:40.826169   29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
	I0914 19:05:40.826206   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:40.826321   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHPort
	I0914 19:05:40.826510   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
	I0914 19:05:40.826658   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
	I0914 19:05:40.826793   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHUsername
	I0914 19:05:40.826952   29302 main.go:141] libmachine: Using SSH client type: native
	I0914 19:05:40.827274   29302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I0914 19:05:40.827292   29302 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-040952' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-040952/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-040952' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 19:05:40.958211   29302 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 19:05:40.958234   29302 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17217-7285/.minikube CaCertPath:/home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17217-7285/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17217-7285/.minikube}
	I0914 19:05:40.958251   29302 buildroot.go:174] setting up certificates
	I0914 19:05:40.958258   29302 provision.go:83] configureAuth start
	I0914 19:05:40.958270   29302 main.go:141] libmachine: (multinode-040952) Calling .GetMachineName
	I0914 19:05:40.958579   29302 main.go:141] libmachine: (multinode-040952) Calling .GetIP
	I0914 19:05:40.960950   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:40.961279   29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
	I0914 19:05:40.961310   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:40.961443   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHHostname
	I0914 19:05:40.963552   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:40.964139   29302 provision.go:138] copyHostCerts
	I0914 19:05:40.966068   29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
	I0914 19:05:40.966080   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17217-7285/.minikube/ca.pem
	I0914 19:05:40.966098   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:40.966106   29302 exec_runner.go:144] found /home/jenkins/minikube-integration/17217-7285/.minikube/ca.pem, removing ...
	I0914 19:05:40.966111   29302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17217-7285/.minikube/ca.pem
	I0914 19:05:40.966169   29302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17217-7285/.minikube/ca.pem (1082 bytes)
	I0914 19:05:40.966263   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17217-7285/.minikube/cert.pem
	I0914 19:05:40.966284   29302 exec_runner.go:144] found /home/jenkins/minikube-integration/17217-7285/.minikube/cert.pem, removing ...
	I0914 19:05:40.966291   29302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17217-7285/.minikube/cert.pem
	I0914 19:05:40.966314   29302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17217-7285/.minikube/cert.pem (1123 bytes)
	I0914 19:05:40.966407   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17217-7285/.minikube/key.pem
	I0914 19:05:40.966426   29302 exec_runner.go:144] found /home/jenkins/minikube-integration/17217-7285/.minikube/key.pem, removing ...
	I0914 19:05:40.966429   29302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17217-7285/.minikube/key.pem
	I0914 19:05:40.966455   29302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17217-7285/.minikube/key.pem (1679 bytes)
	I0914 19:05:40.966496   29302 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17217-7285/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca-key.pem org=jenkins.multinode-040952 san=[192.168.39.14 192.168.39.14 localhost 127.0.0.1 minikube multinode-040952]
	I0914 19:05:41.093709   29302 provision.go:172] copyRemoteCerts
	I0914 19:05:41.093761   29302 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 19:05:41.093784   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHHostname
	I0914 19:05:41.096513   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:41.096889   29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
	I0914 19:05:41.096919   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:41.097089   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHPort
	I0914 19:05:41.097303   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
	I0914 19:05:41.097427   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHUsername
	I0914 19:05:41.097563   29302 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/multinode-040952/id_rsa Username:docker}
	I0914 19:05:41.185959   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0914 19:05:41.186035   29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 19:05:41.209076   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0914 19:05:41.209136   29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0914 19:05:41.231360   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0914 19:05:41.231432   29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 19:05:41.253346   29302 provision.go:86] duration metric: configureAuth took 295.075916ms
	I0914 19:05:41.253364   29302 buildroot.go:189] setting minikube options for container-runtime
	I0914 19:05:41.253583   29302 config.go:182] Loaded profile config "multinode-040952": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 19:05:41.253604   29302 main.go:141] libmachine: (multinode-040952) Calling .DriverName
	I0914 19:05:41.253889   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHHostname
	I0914 19:05:41.256397   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:41.256706   29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
	I0914 19:05:41.256746   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:41.256796   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHPort
	I0914 19:05:41.256990   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
	I0914 19:05:41.257147   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
	I0914 19:05:41.257300   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHUsername
	I0914 19:05:41.257433   29302 main.go:141] libmachine: Using SSH client type: native
	I0914 19:05:41.257764   29302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I0914 19:05:41.257781   29302 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0914 19:05:41.378606   29302 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0914 19:05:41.378636   29302 buildroot.go:70] root file system type: tmpfs
	I0914 19:05:41.378779   29302 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0914 19:05:41.378811   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHHostname
	I0914 19:05:41.381344   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:41.381631   29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
	I0914 19:05:41.381653   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:41.381854   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHPort
	I0914 19:05:41.382017   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
	I0914 19:05:41.382151   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
	I0914 19:05:41.382256   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHUsername
	I0914 19:05:41.382401   29302 main.go:141] libmachine: Using SSH client type: native
	I0914 19:05:41.382846   29302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I0914 19:05:41.382955   29302 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0914 19:05:41.524710   29302 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0914 19:05:41.524751   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHHostname
	I0914 19:05:41.527598   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:41.528021   29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
	I0914 19:05:41.528050   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:41.528233   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHPort
	I0914 19:05:41.528403   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
	I0914 19:05:41.528520   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
	I0914 19:05:41.528618   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHUsername
	I0914 19:05:41.528833   29302 main.go:141] libmachine: Using SSH client type: native
	I0914 19:05:41.529147   29302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I0914 19:05:41.529175   29302 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0914 19:05:42.395560   29302 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0914 19:05:42.395591   29302 machine.go:91] provisioned docker machine in 1.714293106s
	I0914 19:05:42.395605   29302 start.go:300] post-start starting for "multinode-040952" (driver="kvm2")
	I0914 19:05:42.395617   29302 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 19:05:42.395637   29302 main.go:141] libmachine: (multinode-040952) Calling .DriverName
	I0914 19:05:42.395990   29302 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 19:05:42.396021   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHHostname
	I0914 19:05:42.398544   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:42.398997   29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
	I0914 19:05:42.399029   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:42.399146   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHPort
	I0914 19:05:42.399327   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
	I0914 19:05:42.399452   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHUsername
	I0914 19:05:42.399604   29302 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/multinode-040952/id_rsa Username:docker}
	I0914 19:05:42.490598   29302 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 19:05:42.494659   29302 command_runner.go:130] > NAME=Buildroot
	I0914 19:05:42.494675   29302 command_runner.go:130] > VERSION=2021.02.12-1-gaa3debf-dirty
	I0914 19:05:42.494679   29302 command_runner.go:130] > ID=buildroot
	I0914 19:05:42.494684   29302 command_runner.go:130] > VERSION_ID=2021.02.12
	I0914 19:05:42.494689   29302 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0914 19:05:42.494714   29302 info.go:137] Remote host: Buildroot 2021.02.12
	I0914 19:05:42.494726   29302 filesync.go:126] Scanning /home/jenkins/minikube-integration/17217-7285/.minikube/addons for local assets ...
	I0914 19:05:42.494786   29302 filesync.go:126] Scanning /home/jenkins/minikube-integration/17217-7285/.minikube/files for local assets ...
	I0914 19:05:42.494859   29302 filesync.go:149] local asset: /home/jenkins/minikube-integration/17217-7285/.minikube/files/etc/ssl/certs/145062.pem -> 145062.pem in /etc/ssl/certs
	I0914 19:05:42.494867   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/files/etc/ssl/certs/145062.pem -> /etc/ssl/certs/145062.pem
	I0914 19:05:42.494949   29302 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 19:05:42.504158   29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/files/etc/ssl/certs/145062.pem --> /etc/ssl/certs/145062.pem (1708 bytes)
	I0914 19:05:42.526832   29302 start.go:303] post-start completed in 131.213234ms
	I0914 19:05:42.526851   29302 fix.go:56] fixHost completed within 21.475246623s
	I0914 19:05:42.526869   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHHostname
	I0914 19:05:42.529527   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:42.529937   29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
	I0914 19:05:42.529986   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:42.530137   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHPort
	I0914 19:05:42.530338   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
	I0914 19:05:42.530471   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
	I0914 19:05:42.530592   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHUsername
	I0914 19:05:42.530728   29302 main.go:141] libmachine: Using SSH client type: native
	I0914 19:05:42.531030   29302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I0914 19:05:42.531041   29302 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 19:05:42.654398   29302 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694718342.602499385
	
	I0914 19:05:42.654428   29302 fix.go:206] guest clock: 1694718342.602499385
	I0914 19:05:42.654435   29302 fix.go:219] Guest: 2023-09-14 19:05:42.602499385 +0000 UTC Remote: 2023-09-14 19:05:42.526854621 +0000 UTC m=+21.595630701 (delta=75.644764ms)
	I0914 19:05:42.654452   29302 fix.go:190] guest clock delta is within tolerance: 75.644764ms
	I0914 19:05:42.654457   29302 start.go:83] releasing machines lock for "multinode-040952", held for 21.60286411s
	I0914 19:05:42.654478   29302 main.go:141] libmachine: (multinode-040952) Calling .DriverName
	I0914 19:05:42.654724   29302 main.go:141] libmachine: (multinode-040952) Calling .GetIP
	I0914 19:05:42.657287   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:42.657640   29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
	I0914 19:05:42.657674   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:42.657831   29302 main.go:141] libmachine: (multinode-040952) Calling .DriverName
	I0914 19:05:42.658283   29302 main.go:141] libmachine: (multinode-040952) Calling .DriverName
	I0914 19:05:42.658453   29302 main.go:141] libmachine: (multinode-040952) Calling .DriverName
	I0914 19:05:42.658514   29302 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 19:05:42.658551   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHHostname
	I0914 19:05:42.658645   29302 ssh_runner.go:195] Run: cat /version.json
	I0914 19:05:42.658666   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHHostname
	I0914 19:05:42.660832   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:42.661105   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:42.661257   29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
	I0914 19:05:42.661287   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:42.661432   29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
	I0914 19:05:42.661445   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHPort
	I0914 19:05:42.661474   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:42.661579   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHPort
	I0914 19:05:42.661683   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
	I0914 19:05:42.661749   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
	I0914 19:05:42.661825   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHUsername
	I0914 19:05:42.661884   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHUsername
	I0914 19:05:42.661944   29302 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/multinode-040952/id_rsa Username:docker}
	I0914 19:05:42.661988   29302 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/multinode-040952/id_rsa Username:docker}
	I0914 19:05:42.746664   29302 command_runner.go:130] > {"iso_version": "v1.31.0-1694468241-17194", "kicbase_version": "v0.0.40-1694457807-17194", "minikube_version": "v1.31.2", "commit": "08513a9f809e39764bdb93fc427d760a652ba5ea"}
	I0914 19:05:42.747194   29302 ssh_runner.go:195] Run: systemctl --version
	I0914 19:05:42.773722   29302 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0914 19:05:42.773771   29302 command_runner.go:130] > systemd 247 (247)
	I0914 19:05:42.773794   29302 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0914 19:05:42.773870   29302 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0914 19:05:42.779663   29302 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0914 19:05:42.779691   29302 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 19:05:42.779753   29302 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 19:05:42.796458   29302 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0914 19:05:42.796494   29302 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 19:05:42.796506   29302 start.go:469] detecting cgroup driver to use...
	I0914 19:05:42.796618   29302 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 19:05:42.814727   29302 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0914 19:05:42.815085   29302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0914 19:05:42.825286   29302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0914 19:05:42.835590   29302 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0914 19:05:42.835639   29302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0914 19:05:42.845397   29302 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 19:05:42.855075   29302 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0914 19:05:42.864775   29302 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 19:05:42.874625   29302 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 19:05:42.885032   29302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0914 19:05:42.895300   29302 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 19:05:42.904333   29302 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0914 19:05:42.904406   29302 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 19:05:42.913443   29302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 19:05:43.014402   29302 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0914 19:05:43.034266   29302 start.go:469] detecting cgroup driver to use...
	I0914 19:05:43.034341   29302 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0914 19:05:43.046339   29302 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0914 19:05:43.047277   29302 command_runner.go:130] > [Unit]
	I0914 19:05:43.047292   29302 command_runner.go:130] > Description=Docker Application Container Engine
	I0914 19:05:43.047300   29302 command_runner.go:130] > Documentation=https://docs.docker.com
	I0914 19:05:43.047311   29302 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0914 19:05:43.047321   29302 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0914 19:05:43.047330   29302 command_runner.go:130] > StartLimitBurst=3
	I0914 19:05:43.047340   29302 command_runner.go:130] > StartLimitIntervalSec=60
	I0914 19:05:43.047347   29302 command_runner.go:130] > [Service]
	I0914 19:05:43.047354   29302 command_runner.go:130] > Type=notify
	I0914 19:05:43.047374   29302 command_runner.go:130] > Restart=on-failure
	I0914 19:05:43.047387   29302 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0914 19:05:43.047408   29302 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0914 19:05:43.047423   29302 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0914 19:05:43.047437   29302 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0914 19:05:43.047453   29302 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0914 19:05:43.047465   29302 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0914 19:05:43.047478   29302 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0914 19:05:43.047499   29302 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0914 19:05:43.047514   29302 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0914 19:05:43.047523   29302 command_runner.go:130] > ExecStart=
	I0914 19:05:43.047549   29302 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I0914 19:05:43.047562   29302 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0914 19:05:43.047574   29302 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0914 19:05:43.047589   29302 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0914 19:05:43.047600   29302 command_runner.go:130] > LimitNOFILE=infinity
	I0914 19:05:43.047609   29302 command_runner.go:130] > LimitNPROC=infinity
	I0914 19:05:43.047619   29302 command_runner.go:130] > LimitCORE=infinity
	I0914 19:05:43.047632   29302 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0914 19:05:43.047647   29302 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0914 19:05:43.047657   29302 command_runner.go:130] > TasksMax=infinity
	I0914 19:05:43.047668   29302 command_runner.go:130] > TimeoutStartSec=0
	I0914 19:05:43.047682   29302 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0914 19:05:43.047692   29302 command_runner.go:130] > Delegate=yes
	I0914 19:05:43.047706   29302 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0914 19:05:43.047716   29302 command_runner.go:130] > KillMode=process
	I0914 19:05:43.047721   29302 command_runner.go:130] > [Install]
	I0914 19:05:43.047732   29302 command_runner.go:130] > WantedBy=multi-user.target
	I0914 19:05:43.047831   29302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 19:05:43.059348   29302 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 19:05:43.076586   29302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 19:05:43.091070   29302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0914 19:05:43.103630   29302 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0914 19:05:43.127566   29302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0914 19:05:43.140558   29302 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 19:05:43.157218   29302 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0914 19:05:43.157773   29302 ssh_runner.go:195] Run: which cri-dockerd
	I0914 19:05:43.161227   29302 command_runner.go:130] > /usr/bin/cri-dockerd
	I0914 19:05:43.161332   29302 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0914 19:05:43.168999   29302 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0914 19:05:43.184057   29302 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0914 19:05:43.293264   29302 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0914 19:05:43.399283   29302 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0914 19:05:43.399314   29302 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0914 19:05:43.416580   29302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 19:05:43.527824   29302 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0914 19:05:43.992016   29302 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0914 19:05:44.097079   29302 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0914 19:05:44.209025   29302 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0914 19:05:44.320513   29302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 19:05:44.428053   29302 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0914 19:05:44.444720   29302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 19:05:44.552820   29302 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0914 19:05:44.632416   29302 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0914 19:05:44.632491   29302 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0914 19:05:44.638252   29302 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0914 19:05:44.638276   29302 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0914 19:05:44.638286   29302 command_runner.go:130] > Device: 16h/22d	Inode: 831         Links: 1
	I0914 19:05:44.638296   29302 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0914 19:05:44.638305   29302 command_runner.go:130] > Access: 2023-09-14 19:05:44.514543091 +0000
	I0914 19:05:44.638313   29302 command_runner.go:130] > Modify: 2023-09-14 19:05:44.514543091 +0000
	I0914 19:05:44.638326   29302 command_runner.go:130] > Change: 2023-09-14 19:05:44.517543091 +0000
	I0914 19:05:44.638332   29302 command_runner.go:130] >  Birth: -
	I0914 19:05:44.638715   29302 start.go:537] Will wait 60s for crictl version
	I0914 19:05:44.638765   29302 ssh_runner.go:195] Run: which crictl
	I0914 19:05:44.642939   29302 command_runner.go:130] > /usr/bin/crictl
	I0914 19:05:44.643309   29302 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 19:05:44.681642   29302 command_runner.go:130] > Version:  0.1.0
	I0914 19:05:44.681667   29302 command_runner.go:130] > RuntimeName:  docker
	I0914 19:05:44.681672   29302 command_runner.go:130] > RuntimeVersion:  24.0.6
	I0914 19:05:44.681678   29302 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0914 19:05:44.683160   29302 start.go:553] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1alpha2
	I0914 19:05:44.683219   29302 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0914 19:05:44.707204   29302 command_runner.go:130] > 24.0.6
	I0914 19:05:44.708405   29302 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0914 19:05:44.736598   29302 command_runner.go:130] > 24.0.6
	I0914 19:05:44.738686   29302 out.go:204] * Preparing Kubernetes v1.28.1 on Docker 24.0.6 ...
	I0914 19:05:44.738719   29302 main.go:141] libmachine: (multinode-040952) Calling .GetIP
	I0914 19:05:44.741297   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:44.741690   29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
	I0914 19:05:44.741717   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:44.741894   29302 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0914 19:05:44.745777   29302 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 19:05:44.758482   29302 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0914 19:05:44.758533   29302 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0914 19:05:44.777353   29302 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.1
	I0914 19:05:44.777369   29302 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.1
	I0914 19:05:44.777375   29302 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.1
	I0914 19:05:44.777380   29302 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.1
	I0914 19:05:44.777385   29302 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
	I0914 19:05:44.777389   29302 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0914 19:05:44.777395   29302 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0914 19:05:44.777399   29302 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0914 19:05:44.777404   29302 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 19:05:44.777409   29302 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0914 19:05:44.777499   29302 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.1
	registry.k8s.io/kube-proxy:v1.28.1
	registry.k8s.io/kube-controller-manager:v1.28.1
	registry.k8s.io/kube-scheduler:v1.28.1
	kindest/kindnetd:v20230809-80a64d96
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0914 19:05:44.777521   29302 docker.go:566] Images already preloaded, skipping extraction
	I0914 19:05:44.777580   29302 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0914 19:05:44.796442   29302 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.1
	I0914 19:05:44.796466   29302 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.1
	I0914 19:05:44.796474   29302 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.1
	I0914 19:05:44.796487   29302 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.1
	I0914 19:05:44.796495   29302 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
	I0914 19:05:44.796502   29302 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0914 19:05:44.796510   29302 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0914 19:05:44.796517   29302 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0914 19:05:44.796526   29302 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 19:05:44.796533   29302 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0914 19:05:44.796582   29302 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.1
	registry.k8s.io/kube-proxy:v1.28.1
	registry.k8s.io/kube-controller-manager:v1.28.1
	registry.k8s.io/kube-scheduler:v1.28.1
	kindest/kindnetd:v20230809-80a64d96
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0914 19:05:44.796603   29302 cache_images.go:84] Images are preloaded, skipping loading
	I0914 19:05:44.796662   29302 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0914 19:05:44.826844   29302 command_runner.go:130] > cgroupfs
	I0914 19:05:44.827994   29302 cni.go:84] Creating CNI manager for ""
	I0914 19:05:44.828012   29302 cni.go:136] 3 nodes found, recommending kindnet
	I0914 19:05:44.828028   29302 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0914 19:05:44.828050   29302 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.14 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-040952 NodeName:multinode-040952 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.14"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.14 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 19:05:44.828163   29302 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.14
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-040952"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.14
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.14"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 19:05:44.828241   29302 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-040952 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.14
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:multinode-040952 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0914 19:05:44.828290   29302 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0914 19:05:44.837426   29302 command_runner.go:130] > kubeadm
	I0914 19:05:44.837444   29302 command_runner.go:130] > kubectl
	I0914 19:05:44.837448   29302 command_runner.go:130] > kubelet
	I0914 19:05:44.837478   29302 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 19:05:44.837538   29302 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 19:05:44.845710   29302 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0914 19:05:44.861289   29302 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 19:05:44.876364   29302 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I0914 19:05:44.892748   29302 ssh_runner.go:195] Run: grep 192.168.39.14	control-plane.minikube.internal$ /etc/hosts
	I0914 19:05:44.896225   29302 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.14	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 19:05:44.908521   29302 certs.go:56] Setting up /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952 for IP: 192.168.39.14
	I0914 19:05:44.908554   29302 certs.go:190] acquiring lock for shared ca certs: {Name:mk8231a646ae91c44c394a9ea29f867fd3f74220 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 19:05:44.908702   29302 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17217-7285/.minikube/ca.key
	I0914 19:05:44.908750   29302 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17217-7285/.minikube/proxy-client-ca.key
	I0914 19:05:44.908825   29302 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/client.key
	I0914 19:05:44.908896   29302 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/apiserver.key.ba52ec04
	I0914 19:05:44.908936   29302 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/proxy-client.key
	I0914 19:05:44.908959   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0914 19:05:44.908984   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0914 19:05:44.909003   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0914 19:05:44.909021   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0914 19:05:44.909038   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0914 19:05:44.909057   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0914 19:05:44.909069   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0914 19:05:44.909083   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0914 19:05:44.909133   29302 certs.go:437] found cert: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/home/jenkins/minikube-integration/17217-7285/.minikube/certs/14506.pem (1338 bytes)
	W0914 19:05:44.909164   29302 certs.go:433] ignoring /home/jenkins/minikube-integration/17217-7285/.minikube/certs/home/jenkins/minikube-integration/17217-7285/.minikube/certs/14506_empty.pem, impossibly tiny 0 bytes
	I0914 19:05:44.909175   29302 certs.go:437] found cert: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 19:05:44.909194   29302 certs.go:437] found cert: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca.pem (1082 bytes)
	I0914 19:05:44.909221   29302 certs.go:437] found cert: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/home/jenkins/minikube-integration/17217-7285/.minikube/certs/cert.pem (1123 bytes)
	I0914 19:05:44.909246   29302 certs.go:437] found cert: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/home/jenkins/minikube-integration/17217-7285/.minikube/certs/key.pem (1679 bytes)
	I0914 19:05:44.909284   29302 certs.go:437] found cert: /home/jenkins/minikube-integration/17217-7285/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17217-7285/.minikube/files/etc/ssl/certs/145062.pem (1708 bytes)
	I0914 19:05:44.909309   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/14506.pem -> /usr/share/ca-certificates/14506.pem
	I0914 19:05:44.909322   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/files/etc/ssl/certs/145062.pem -> /usr/share/ca-certificates/145062.pem
	I0914 19:05:44.909336   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0914 19:05:44.909846   29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0914 19:05:44.934419   29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0914 19:05:44.957511   29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 19:05:44.980559   29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 19:05:45.004923   29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 19:05:45.028375   29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 19:05:45.051817   29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 19:05:45.074510   29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0914 19:05:45.098260   29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/certs/14506.pem --> /usr/share/ca-certificates/14506.pem (1338 bytes)
	I0914 19:05:45.121292   29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/files/etc/ssl/certs/145062.pem --> /usr/share/ca-certificates/145062.pem (1708 bytes)
	I0914 19:05:45.144038   29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 19:05:45.166026   29302 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 19:05:45.181807   29302 ssh_runner.go:195] Run: openssl version
	I0914 19:05:45.187376   29302 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0914 19:05:45.187428   29302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14506.pem && ln -fs /usr/share/ca-certificates/14506.pem /etc/ssl/certs/14506.pem"
	I0914 19:05:45.196849   29302 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14506.pem
	I0914 19:05:45.201160   29302 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 14 18:48 /usr/share/ca-certificates/14506.pem
	I0914 19:05:45.201218   29302 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 14 18:48 /usr/share/ca-certificates/14506.pem
	I0914 19:05:45.201259   29302 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14506.pem
	I0914 19:05:45.206455   29302 command_runner.go:130] > 51391683
	I0914 19:05:45.206657   29302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14506.pem /etc/ssl/certs/51391683.0"
	I0914 19:05:45.216148   29302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145062.pem && ln -fs /usr/share/ca-certificates/145062.pem /etc/ssl/certs/145062.pem"
	I0914 19:05:45.225498   29302 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145062.pem
	I0914 19:05:45.229584   29302 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 14 18:48 /usr/share/ca-certificates/145062.pem
	I0914 19:05:45.229749   29302 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 14 18:48 /usr/share/ca-certificates/145062.pem
	I0914 19:05:45.229794   29302 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145062.pem
	I0914 19:05:45.235209   29302 command_runner.go:130] > 3ec20f2e
	I0914 19:05:45.235283   29302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145062.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 19:05:45.244557   29302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 19:05:45.253825   29302 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 19:05:45.258352   29302 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 14 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0914 19:05:45.258379   29302 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 14 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0914 19:05:45.258421   29302 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 19:05:45.263679   29302 command_runner.go:130] > b5213941
	I0914 19:05:45.263724   29302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 19:05:45.273201   29302 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0914 19:05:45.277387   29302 command_runner.go:130] > ca.crt
	I0914 19:05:45.277404   29302 command_runner.go:130] > ca.key
	I0914 19:05:45.277412   29302 command_runner.go:130] > healthcheck-client.crt
	I0914 19:05:45.277419   29302 command_runner.go:130] > healthcheck-client.key
	I0914 19:05:45.277426   29302 command_runner.go:130] > peer.crt
	I0914 19:05:45.277433   29302 command_runner.go:130] > peer.key
	I0914 19:05:45.277439   29302 command_runner.go:130] > server.crt
	I0914 19:05:45.277446   29302 command_runner.go:130] > server.key
	I0914 19:05:45.277502   29302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 19:05:45.283251   29302 command_runner.go:130] > Certificate will not expire
	I0914 19:05:45.283310   29302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 19:05:45.289331   29302 command_runner.go:130] > Certificate will not expire
	I0914 19:05:45.289405   29302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 19:05:45.295261   29302 command_runner.go:130] > Certificate will not expire
	I0914 19:05:45.295329   29302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 19:05:45.300680   29302 command_runner.go:130] > Certificate will not expire
	I0914 19:05:45.300910   29302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 19:05:45.306424   29302 command_runner.go:130] > Certificate will not expire
	I0914 19:05:45.306599   29302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0914 19:05:45.311906   29302 command_runner.go:130] > Certificate will not expire
	I0914 19:05:45.312249   29302 kubeadm.go:404] StartCluster: {Name:multinode-040952 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-040952 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.14 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.16 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.107 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 19:05:45.312423   29302 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0914 19:05:45.331162   29302 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 19:05:45.340190   29302 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0914 19:05:45.340212   29302 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0914 19:05:45.340221   29302 command_runner.go:130] > /var/lib/minikube/etcd:
	I0914 19:05:45.340226   29302 command_runner.go:130] > member
	I0914 19:05:45.340246   29302 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0914 19:05:45.340267   29302 kubeadm.go:636] restartCluster start
	I0914 19:05:45.340309   29302 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 19:05:45.348452   29302 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:45.348894   29302 kubeconfig.go:135] verify returned: extract IP: "multinode-040952" does not appear in /home/jenkins/minikube-integration/17217-7285/kubeconfig
	I0914 19:05:45.348998   29302 kubeconfig.go:146] "multinode-040952" context is missing from /home/jenkins/minikube-integration/17217-7285/kubeconfig - will repair!
	I0914 19:05:45.349266   29302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17217-7285/kubeconfig: {Name:mkd810f3a7b7ee0c3e3eff94a19f3da881e8200c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 19:05:45.349662   29302 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17217-7285/kubeconfig
	I0914 19:05:45.349849   29302 kapi.go:59] client config for multinode-040952: &rest.Config{Host:"https://192.168.39.14:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/client.crt", KeyFile:"/home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/client.key", CAFile:"/home/jenkins/minikube-integration/17217-7285/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c15e60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0914 19:05:45.350444   29302 cert_rotation.go:137] Starting client certificate rotation controller
	I0914 19:05:45.350587   29302 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 19:05:45.358418   29302 api_server.go:166] Checking apiserver status ...
	I0914 19:05:45.358456   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 19:05:45.368403   29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:45.368429   29302 api_server.go:166] Checking apiserver status ...
	I0914 19:05:45.368512   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 19:05:45.378454   29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:45.879114   29302 api_server.go:166] Checking apiserver status ...
	I0914 19:05:45.879187   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 19:05:45.890404   29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:46.379073   29302 api_server.go:166] Checking apiserver status ...
	I0914 19:05:46.379137   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 19:05:46.390460   29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:46.878635   29302 api_server.go:166] Checking apiserver status ...
	I0914 19:05:46.878712   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 19:05:46.890234   29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:47.378771   29302 api_server.go:166] Checking apiserver status ...
	I0914 19:05:47.378861   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 19:05:47.390972   29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:47.879569   29302 api_server.go:166] Checking apiserver status ...
	I0914 19:05:47.879636   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 19:05:47.891015   29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:48.378618   29302 api_server.go:166] Checking apiserver status ...
	I0914 19:05:48.378691   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 19:05:48.390037   29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:48.878591   29302 api_server.go:166] Checking apiserver status ...
	I0914 19:05:48.878656   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 19:05:48.889682   29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:49.379283   29302 api_server.go:166] Checking apiserver status ...
	I0914 19:05:49.379348   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 19:05:49.390298   29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:49.878830   29302 api_server.go:166] Checking apiserver status ...
	I0914 19:05:49.878929   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 19:05:49.890070   29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:50.378594   29302 api_server.go:166] Checking apiserver status ...
	I0914 19:05:50.378669   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 19:05:50.389750   29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:50.879406   29302 api_server.go:166] Checking apiserver status ...
	I0914 19:05:50.879474   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 19:05:50.890792   29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:51.378749   29302 api_server.go:166] Checking apiserver status ...
	I0914 19:05:51.378818   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 19:05:51.390362   29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:51.878913   29302 api_server.go:166] Checking apiserver status ...
	I0914 19:05:51.878983   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 19:05:51.890684   29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:52.379313   29302 api_server.go:166] Checking apiserver status ...
	I0914 19:05:52.379396   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 19:05:52.390412   29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:52.878965   29302 api_server.go:166] Checking apiserver status ...
	I0914 19:05:52.879054   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 19:05:52.890079   29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:53.378659   29302 api_server.go:166] Checking apiserver status ...
	I0914 19:05:53.378734   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 19:05:53.389835   29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:53.879480   29302 api_server.go:166] Checking apiserver status ...
	I0914 19:05:53.879549   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 19:05:53.890643   29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:54.379316   29302 api_server.go:166] Checking apiserver status ...
	I0914 19:05:54.379396   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 19:05:54.390543   29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:54.879126   29302 api_server.go:166] Checking apiserver status ...
	I0914 19:05:54.879190   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 19:05:54.890939   29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:55.358694   29302 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0914 19:05:55.358719   29302 kubeadm.go:1128] stopping kube-system containers ...
	I0914 19:05:55.358774   29302 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0914 19:05:55.380728   29302 command_runner.go:130] > 5ca168b256ec
	I0914 19:05:55.380744   29302 command_runner.go:130] > bda018c9a602
	I0914 19:05:55.380748   29302 command_runner.go:130] > fb2dbcea99e9
	I0914 19:05:55.380752   29302 command_runner.go:130] > 2de9c2baa72f
	I0914 19:05:55.380756   29302 command_runner.go:130] > 1dac2d18ee96
	I0914 19:05:55.380760   29302 command_runner.go:130] > bd14e8416f22
	I0914 19:05:55.380764   29302 command_runner.go:130] > 2c6b193d8f06
	I0914 19:05:55.380768   29302 command_runner.go:130] > ac89590af9af
	I0914 19:05:55.380771   29302 command_runner.go:130] > e7dd2a8d2bf2
	I0914 19:05:55.380776   29302 command_runner.go:130] > 79de1cbad023
	I0914 19:05:55.380780   29302 command_runner.go:130] > bdae306df774
	I0914 19:05:55.380783   29302 command_runner.go:130] > 7ae1932584ff
	I0914 19:05:55.380787   29302 command_runner.go:130] > 3204588282f3
	I0914 19:05:55.380790   29302 command_runner.go:130] > c60a4b7edf2a
	I0914 19:05:55.380794   29302 command_runner.go:130] > bf69af78fefd
	I0914 19:05:55.380798   29302 command_runner.go:130] > 992d221cf3de
	I0914 19:05:55.381007   29302 docker.go:462] Stopping containers: [5ca168b256ec bda018c9a602 fb2dbcea99e9 2de9c2baa72f 1dac2d18ee96 bd14e8416f22 2c6b193d8f06 ac89590af9af e7dd2a8d2bf2 79de1cbad023 bdae306df774 7ae1932584ff 3204588282f3 c60a4b7edf2a bf69af78fefd 992d221cf3de]
	I0914 19:05:55.381063   29302 ssh_runner.go:195] Run: docker stop 5ca168b256ec bda018c9a602 fb2dbcea99e9 2de9c2baa72f 1dac2d18ee96 bd14e8416f22 2c6b193d8f06 ac89590af9af e7dd2a8d2bf2 79de1cbad023 bdae306df774 7ae1932584ff 3204588282f3 c60a4b7edf2a bf69af78fefd 992d221cf3de
	I0914 19:05:55.400500   29302 command_runner.go:130] > 5ca168b256ec
	I0914 19:05:55.400523   29302 command_runner.go:130] > bda018c9a602
	I0914 19:05:55.400528   29302 command_runner.go:130] > fb2dbcea99e9
	I0914 19:05:55.400532   29302 command_runner.go:130] > 2de9c2baa72f
	I0914 19:05:55.400537   29302 command_runner.go:130] > 1dac2d18ee96
	I0914 19:05:55.400545   29302 command_runner.go:130] > bd14e8416f22
	I0914 19:05:55.400549   29302 command_runner.go:130] > 2c6b193d8f06
	I0914 19:05:55.400915   29302 command_runner.go:130] > ac89590af9af
	I0914 19:05:55.400933   29302 command_runner.go:130] > e7dd2a8d2bf2
	I0914 19:05:55.400941   29302 command_runner.go:130] > 79de1cbad023
	I0914 19:05:55.400947   29302 command_runner.go:130] > bdae306df774
	I0914 19:05:55.400953   29302 command_runner.go:130] > 7ae1932584ff
	I0914 19:05:55.400959   29302 command_runner.go:130] > 3204588282f3
	I0914 19:05:55.400965   29302 command_runner.go:130] > c60a4b7edf2a
	I0914 19:05:55.400970   29302 command_runner.go:130] > bf69af78fefd
	I0914 19:05:55.400976   29302 command_runner.go:130] > 992d221cf3de
	I0914 19:05:55.402045   29302 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 19:05:55.416372   29302 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 19:05:55.424910   29302 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0914 19:05:55.424932   29302 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0914 19:05:55.424943   29302 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0914 19:05:55.424952   29302 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 19:05:55.424980   29302 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 19:05:55.425021   29302 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 19:05:55.433299   29302 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0914 19:05:55.433317   29302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 19:05:55.549527   29302 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 19:05:55.549554   29302 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0914 19:05:55.549564   29302 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0914 19:05:55.549574   29302 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 19:05:55.549583   29302 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0914 19:05:55.549599   29302 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0914 19:05:55.549609   29302 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0914 19:05:55.549615   29302 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0914 19:05:55.549624   29302 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0914 19:05:55.549633   29302 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 19:05:55.549640   29302 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 19:05:55.549657   29302 command_runner.go:130] > [certs] Using the existing "sa" key
	I0914 19:05:55.549745   29302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 19:05:55.598988   29302 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 19:05:55.824313   29302 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 19:05:55.900894   29302 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 19:05:56.276915   29302 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 19:05:56.339928   29302 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 19:05:56.342661   29302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 19:05:56.405203   29302 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 19:05:56.406633   29302 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 19:05:56.407055   29302 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0914 19:05:56.524034   29302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 19:05:56.589683   29302 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 19:05:56.589714   29302 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 19:05:56.593812   29302 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 19:05:56.595032   29302 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 19:05:56.597321   29302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 19:05:56.696497   29302 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 19:05:56.699815   29302 api_server.go:52] waiting for apiserver process to appear ...
	I0914 19:05:56.699898   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 19:05:56.713289   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 19:05:57.226345   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 19:05:57.726390   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 19:05:58.226095   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 19:05:58.726390   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 19:05:59.226644   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 19:05:59.241067   29302 command_runner.go:130] > 1693
	I0914 19:05:59.241381   29302 api_server.go:72] duration metric: took 2.541565826s to wait for apiserver process to appear ...
	I0914 19:05:59.241402   29302 api_server.go:88] waiting for apiserver healthz status ...
	I0914 19:05:59.241422   29302 api_server.go:253] Checking apiserver healthz at https://192.168.39.14:8443/healthz ...
	I0914 19:06:02.195757   29302 api_server.go:279] https://192.168.39.14:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 19:06:02.195786   29302 api_server.go:103] status: https://192.168.39.14:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 19:06:02.195796   29302 api_server.go:253] Checking apiserver healthz at https://192.168.39.14:8443/healthz ...
	I0914 19:06:02.307219   29302 api_server.go:279] https://192.168.39.14:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 19:06:02.307250   29302 api_server.go:103] status: https://192.168.39.14:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 19:06:02.807963   29302 api_server.go:253] Checking apiserver healthz at https://192.168.39.14:8443/healthz ...
	I0914 19:06:02.814842   29302 api_server.go:279] https://192.168.39.14:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 19:06:02.814876   29302 api_server.go:103] status: https://192.168.39.14:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 19:06:03.307503   29302 api_server.go:253] Checking apiserver healthz at https://192.168.39.14:8443/healthz ...
	I0914 19:06:03.315888   29302 api_server.go:279] https://192.168.39.14:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 19:06:03.315914   29302 api_server.go:103] status: https://192.168.39.14:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 19:06:03.807505   29302 api_server.go:253] Checking apiserver healthz at https://192.168.39.14:8443/healthz ...
	I0914 19:06:03.812721   29302 api_server.go:279] https://192.168.39.14:8443/healthz returned 200:
	ok
	I0914 19:06:03.812788   29302 round_trippers.go:463] GET https://192.168.39.14:8443/version
	I0914 19:06:03.812794   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:03.812802   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:03.812809   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:03.821345   29302 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0914 19:06:03.821376   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:03.821387   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:03.821396   29302 round_trippers.go:580]     Content-Length: 263
	I0914 19:06:03.821402   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:03 GMT
	I0914 19:06:03.821410   29302 round_trippers.go:580]     Audit-Id: a2a9e97f-3007-4290-8f99-481d06fc6049
	I0914 19:06:03.821417   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:03.821424   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:03.821433   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:03.821483   29302 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.1",
	  "gitCommit": "8dc49c4b984b897d423aab4971090e1879eb4f23",
	  "gitTreeState": "clean",
	  "buildDate": "2023-08-24T11:16:30Z",
	  "goVersion": "go1.20.7",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0914 19:06:03.821569   29302 api_server.go:141] control plane version: v1.28.1
	I0914 19:06:03.821589   29302 api_server.go:131] duration metric: took 4.580178903s to wait for apiserver health ...
	I0914 19:06:03.821600   29302 cni.go:84] Creating CNI manager for ""
	I0914 19:06:03.821611   29302 cni.go:136] 3 nodes found, recommending kindnet
	I0914 19:06:03.823525   29302 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0914 19:06:03.825085   29302 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0914 19:06:03.832345   29302 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0914 19:06:03.832364   29302 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0914 19:06:03.832370   29302 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0914 19:06:03.832380   29302 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0914 19:06:03.832391   29302 command_runner.go:130] > Access: 2023-09-14 19:05:33.824543091 +0000
	I0914 19:06:03.832399   29302 command_runner.go:130] > Modify: 2023-09-12 03:24:25.000000000 +0000
	I0914 19:06:03.832416   29302 command_runner.go:130] > Change: 2023-09-14 19:05:31.874543091 +0000
	I0914 19:06:03.832422   29302 command_runner.go:130] >  Birth: -
	I0914 19:06:03.832466   29302 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
	I0914 19:06:03.832475   29302 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0914 19:06:03.901488   29302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0914 19:06:05.205755   29302 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0914 19:06:05.209188   29302 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0914 19:06:05.212024   29302 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0914 19:06:05.225376   29302 command_runner.go:130] > daemonset.apps/kindnet configured
	I0914 19:06:05.229823   29302 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.32829993s)
	I0914 19:06:05.229853   29302 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 19:06:05.229964   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
	I0914 19:06:05.229975   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:05.229982   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:05.229988   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:05.234117   29302 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 19:06:05.234139   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:05.234149   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:05 GMT
	I0914 19:06:05.234158   29302 round_trippers.go:580]     Audit-Id: 78bdb13b-ed79-4db3-8008-4289bacf78fd
	I0914 19:06:05.234172   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:05.234180   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:05.234188   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:05.234195   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:05.236145   29302 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"795"},"items":[{"metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"790","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84544 chars]
	I0914 19:06:05.239946   29302 system_pods.go:59] 12 kube-system pods found
	I0914 19:06:05.239984   29302 system_pods.go:61] "coredns-5dd5756b68-qrv2r" [f9293d00-1000-4ffa-b978-d08c00eee7e7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 19:06:05.239998   29302 system_pods.go:61] "etcd-multinode-040952" [69002c12-b452-4986-a79f-1d67702a52ef] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 19:06:05.240008   29302 system_pods.go:61] "kindnet-hvz8s" [38b5564e-8c11-45e8-9751-bcaa4258a342] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0914 19:06:05.240015   29302 system_pods.go:61] "kindnet-lrkhw" [9861f216-97e0-4761-9531-cb34e8855913] Running
	I0914 19:06:05.240026   29302 system_pods.go:61] "kindnet-pjfsc" [7716e479-4492-439b-9bdf-077a541dc949] Running
	I0914 19:06:05.240036   29302 system_pods.go:61] "kube-apiserver-multinode-040952" [10fd42d2-c2af-48e4-8724-c8ffe95daa20] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 19:06:05.240054   29302 system_pods.go:61] "kube-controller-manager-multinode-040952" [a3657cb3-c202-4067-83e1-e015b97f23c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 19:06:05.240067   29302 system_pods.go:61] "kube-proxy-gldkh" [55ba7c02-d066-4399-a622-621499fbc662] Running
	I0914 19:06:05.240073   29302 system_pods.go:61] "kube-proxy-gpl2p" [4e6ab5b8-53fa-4e56-b534-e130dc2b3c0f] Running
	I0914 19:06:05.240087   29302 system_pods.go:61] "kube-proxy-hbsmt" [d68fe199-9969-47a9-95a1-04e766c5dbaa] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0914 19:06:05.240101   29302 system_pods.go:61] "kube-scheduler-multinode-040952" [386eb63c-5554-4ab9-8241-b096f390ee9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 19:06:05.240113   29302 system_pods.go:61] "storage-provisioner" [8f25fe5b-237f-415a-baca-e4342106bb4d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 19:06:05.240123   29302 system_pods.go:74] duration metric: took 10.263188ms to wait for pod list to return data ...
	I0914 19:06:05.240135   29302 node_conditions.go:102] verifying NodePressure condition ...
	I0914 19:06:05.240193   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes
	I0914 19:06:05.240202   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:05.240212   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:05.240223   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:05.245363   29302 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0914 19:06:05.245382   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:05.245393   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:05.245401   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:05 GMT
	I0914 19:06:05.245416   29302 round_trippers.go:580]     Audit-Id: ee9162aa-d308-4bb2-927d-55e7e1011d87
	I0914 19:06:05.245424   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:05.245435   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:05.245471   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:05.245800   29302 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"795"},"items":[{"metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"782","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 13790 chars]
	I0914 19:06:05.246934   29302 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 19:06:05.246965   29302 node_conditions.go:123] node cpu capacity is 2
	I0914 19:06:05.246982   29302 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 19:06:05.246996   29302 node_conditions.go:123] node cpu capacity is 2
	I0914 19:06:05.247002   29302 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 19:06:05.247012   29302 node_conditions.go:123] node cpu capacity is 2
	I0914 19:06:05.247020   29302 node_conditions.go:105] duration metric: took 6.879016ms to run NodePressure ...
	I0914 19:06:05.247043   29302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 19:06:05.487041   29302 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0914 19:06:05.487069   29302 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0914 19:06:05.487097   29302 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0914 19:06:05.487490   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0914 19:06:05.487506   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:05.487516   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:05.487526   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:05.491797   29302 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 19:06:05.491820   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:05.491831   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:05.491840   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:05.491848   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:05.491857   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:05.491866   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:05 GMT
	I0914 19:06:05.491875   29302 round_trippers.go:580]     Audit-Id: 9814298e-c189-437e-bfca-dbe0a19423d2
	I0914 19:06:05.492280   29302 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"797"},"items":[{"metadata":{"name":"etcd-multinode-040952","namespace":"kube-system","uid":"69002c12-b452-4986-a79f-1d67702a52ef","resourceVersion":"785","creationTimestamp":"2023-09-14T19:01:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.14:2379","kubernetes.io/config.hash":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.mirror":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.seen":"2023-09-14T19:01:40.726714562Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:ku [truncated 29761 chars]
	I0914 19:06:05.493221   29302 kubeadm.go:787] kubelet initialised
	I0914 19:06:05.493240   29302 kubeadm.go:788] duration metric: took 6.131207ms waiting for restarted kubelet to initialise ...
	I0914 19:06:05.493249   29302 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 19:06:05.493307   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
	I0914 19:06:05.493322   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:05.493334   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:05.493347   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:05.496849   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:05.496867   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:05.496876   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:05.496885   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:05.496892   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:05 GMT
	I0914 19:06:05.496901   29302 round_trippers.go:580]     Audit-Id: a7031aa1-24df-4c90-9e52-85f8f96f783c
	I0914 19:06:05.496912   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:05.496921   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:05.497873   29302 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"797"},"items":[{"metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"790","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84544 chars]
	I0914 19:06:05.500273   29302 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-qrv2r" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:05.500335   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qrv2r
	I0914 19:06:05.500343   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:05.500350   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:05.500356   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:05.502411   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:05.502429   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:05.502441   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:05.502449   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:05.502459   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:05.502469   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:05.502478   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:05 GMT
	I0914 19:06:05.502490   29302 round_trippers.go:580]     Audit-Id: f347830a-65d2-4cb4-8423-8b8fc5cc870f
	I0914 19:06:05.502830   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"790","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6544 chars]
	I0914 19:06:05.503304   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:05.503318   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:05.503328   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:05.503337   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:05.505839   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:05.505853   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:05.505864   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:05.505870   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:05.505875   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:05.505880   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:05.505886   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:05 GMT
	I0914 19:06:05.505894   29302 round_trippers.go:580]     Audit-Id: 71902073-b1b8-4c71-b1d1-af71d48217f1
	I0914 19:06:05.506071   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"782","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5283 chars]
	I0914 19:06:05.506467   29302 pod_ready.go:97] node "multinode-040952" hosting pod "coredns-5dd5756b68-qrv2r" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-040952" has status "Ready":"False"
	I0914 19:06:05.506490   29302 pod_ready.go:81] duration metric: took 6.199179ms waiting for pod "coredns-5dd5756b68-qrv2r" in "kube-system" namespace to be "Ready" ...
	E0914 19:06:05.506501   29302 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-040952" hosting pod "coredns-5dd5756b68-qrv2r" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-040952" has status "Ready":"False"
	I0914 19:06:05.506518   29302 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-040952" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:05.506572   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-040952
	I0914 19:06:05.506583   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:05.506593   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:05.506606   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:05.508379   29302 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 19:06:05.508391   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:05.508397   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:05.508403   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:05.508408   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:05 GMT
	I0914 19:06:05.508414   29302 round_trippers.go:580]     Audit-Id: adfe03d4-2812-4ba5-98dd-67afaa529395
	I0914 19:06:05.508419   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:05.508425   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:05.508772   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-040952","namespace":"kube-system","uid":"69002c12-b452-4986-a79f-1d67702a52ef","resourceVersion":"785","creationTimestamp":"2023-09-14T19:01:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.14:2379","kubernetes.io/config.hash":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.mirror":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.seen":"2023-09-14T19:01:40.726714562Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6295 chars]
	I0914 19:06:05.509094   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:05.509104   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:05.509111   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:05.509116   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:05.510985   29302 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 19:06:05.511003   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:05.511012   29302 round_trippers.go:580]     Audit-Id: 0ee321ba-916a-449f-a719-2eb1a4973cde
	I0914 19:06:05.511019   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:05.511028   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:05.511036   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:05.511044   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:05.511057   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:05 GMT
	I0914 19:06:05.511184   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"782","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5283 chars]
	I0914 19:06:05.511454   29302 pod_ready.go:97] node "multinode-040952" hosting pod "etcd-multinode-040952" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-040952" has status "Ready":"False"
	I0914 19:06:05.511470   29302 pod_ready.go:81] duration metric: took 4.945047ms waiting for pod "etcd-multinode-040952" in "kube-system" namespace to be "Ready" ...
	E0914 19:06:05.511477   29302 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-040952" hosting pod "etcd-multinode-040952" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-040952" has status "Ready":"False"
	I0914 19:06:05.511489   29302 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-040952" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:05.511533   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-040952
	I0914 19:06:05.511540   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:05.511546   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:05.511552   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:05.513172   29302 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 19:06:05.513189   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:05.513198   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:05.513206   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:05.513213   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:05.513222   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:05.513230   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:05 GMT
	I0914 19:06:05.513246   29302 round_trippers.go:580]     Audit-Id: 98886ad5-cb3e-42c1-9236-b75a8e09f5f5
	I0914 19:06:05.513380   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-040952","namespace":"kube-system","uid":"10fd42d2-c2af-48e4-8724-c8ffe95daa20","resourceVersion":"786","creationTimestamp":"2023-09-14T19:01:41Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.14:8443","kubernetes.io/config.hash":"8756931ebb3ad632d1fa90a79d546b12","kubernetes.io/config.mirror":"8756931ebb3ad632d1fa90a79d546b12","kubernetes.io/config.seen":"2023-09-14T19:01:40.726715710Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7850 chars]
	I0914 19:06:05.513760   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:05.513773   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:05.513780   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:05.513786   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:05.515437   29302 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 19:06:05.515456   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:05.515464   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:05.515472   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:05 GMT
	I0914 19:06:05.515481   29302 round_trippers.go:580]     Audit-Id: cc794f2f-df9b-4b8c-8271-303fbb3bda2a
	I0914 19:06:05.515489   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:05.515502   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:05.515510   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:05.515753   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"782","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5283 chars]
	I0914 19:06:05.516001   29302 pod_ready.go:97] node "multinode-040952" hosting pod "kube-apiserver-multinode-040952" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-040952" has status "Ready":"False"
	I0914 19:06:05.516014   29302 pod_ready.go:81] duration metric: took 4.515313ms waiting for pod "kube-apiserver-multinode-040952" in "kube-system" namespace to be "Ready" ...
	E0914 19:06:05.516021   29302 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-040952" hosting pod "kube-apiserver-multinode-040952" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-040952" has status "Ready":"False"
	I0914 19:06:05.516027   29302 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-040952" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:05.516066   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-040952
	I0914 19:06:05.516073   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:05.516080   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:05.516086   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:05.518245   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:05.518263   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:05.518277   29302 round_trippers.go:580]     Audit-Id: 6779b7f0-25f9-49d1-be85-87a44d8c3552
	I0914 19:06:05.518286   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:05.518294   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:05.518301   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:05.518314   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:05.518322   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:05 GMT
	I0914 19:06:05.518564   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-040952","namespace":"kube-system","uid":"a3657cb3-c202-4067-83e1-e015b97f23c7","resourceVersion":"783","creationTimestamp":"2023-09-14T19:01:41Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"eae1e4ee5d796cbce52373fd75c02fd6","kubernetes.io/config.mirror":"eae1e4ee5d796cbce52373fd75c02fd6","kubernetes.io/config.seen":"2023-09-14T19:01:40.726708753Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7436 chars]
	I0914 19:06:05.630264   29302 request.go:629] Waited for 111.324976ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:05.630352   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:05.630359   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:05.630372   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:05.630382   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:05.632981   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:05.633000   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:05.633006   29302 round_trippers.go:580]     Audit-Id: fd7872d6-edd4-429f-97f2-b2ec1c12de54
	I0914 19:06:05.633012   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:05.633017   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:05.633023   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:05.633028   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:05.633036   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:05 GMT
	I0914 19:06:05.633196   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"782","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5283 chars]
	I0914 19:06:05.633629   29302 pod_ready.go:97] node "multinode-040952" hosting pod "kube-controller-manager-multinode-040952" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-040952" has status "Ready":"False"
	I0914 19:06:05.633656   29302 pod_ready.go:81] duration metric: took 117.619154ms waiting for pod "kube-controller-manager-multinode-040952" in "kube-system" namespace to be "Ready" ...
	E0914 19:06:05.633669   29302 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-040952" hosting pod "kube-controller-manager-multinode-040952" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-040952" has status "Ready":"False"
	I0914 19:06:05.633680   29302 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-gldkh" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:05.830043   29302 request.go:629] Waited for 196.287848ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gldkh
	I0914 19:06:05.830099   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gldkh
	I0914 19:06:05.830103   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:05.830111   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:05.830118   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:05.832762   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:05.832785   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:05.832794   29302 round_trippers.go:580]     Audit-Id: 3c18be9a-6c71-4025-be83-5fc9c53246a5
	I0914 19:06:05.832801   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:05.832808   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:05.832815   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:05.832822   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:05.832829   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:05 GMT
	I0914 19:06:05.833118   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gldkh","generateName":"kube-proxy-","namespace":"kube-system","uid":"55ba7c02-d066-4399-a622-621499fbc662","resourceVersion":"541","creationTimestamp":"2023-09-14T19:02:56Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b81636f3-a9be-4290-be24-324c7fac8ce6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:02:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b81636f3-a9be-4290-be24-324c7fac8ce6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5541 chars]
	I0914 19:06:06.029994   29302 request.go:629] Waited for 196.460915ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/multinode-040952-m02
	I0914 19:06:06.030079   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952-m02
	I0914 19:06:06.030087   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:06.030099   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:06.030108   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:06.032502   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:06.032520   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:06.032527   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:05 GMT
	I0914 19:06:06.032532   29302 round_trippers.go:580]     Audit-Id: 9d3f52cf-02ab-4abb-92c1-8a7d06224f0e
	I0914 19:06:06.032538   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:06.032542   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:06.032547   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:06.032553   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:06.032888   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952-m02","uid":"26bddb4d-d211-4e3d-a188-317e100d2aa5","resourceVersion":"608","creationTimestamp":"2023-09-14T19:02:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:02:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3266 chars]
	I0914 19:06:06.033151   29302 pod_ready.go:92] pod "kube-proxy-gldkh" in "kube-system" namespace has status "Ready":"True"
	I0914 19:06:06.033165   29302 pod_ready.go:81] duration metric: took 399.477836ms waiting for pod "kube-proxy-gldkh" in "kube-system" namespace to be "Ready" ...
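
The repeated "Waited for ... due to client-side throttling, not priority and fairness" lines above come from client-go's client-side rate limiter: the rest.Config dumped further down in this log leaves QPS and Burst at 0, so client-go falls back to its defaults (5 requests/s, burst 10) and spaces these GETs out. A minimal sketch of raising those limits when building a clientset; the kubeconfig path is a placeholder, not one from this run:

    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Placeholder kubeconfig path, for illustration only.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        // Left at 0 (as in the kapi.go client config later in this log), client-go
        // falls back to its defaults of 5 QPS with a burst of 10, which is what
        // produces the client-side throttling waits seen here.
        cfg.QPS = 50
        cfg.Burst = 100
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        _ = cs // use the clientset as usual
    }
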
	I0914 19:06:06.033173   29302 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-gpl2p" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:06.230655   29302 request.go:629] Waited for 197.428191ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gpl2p
	I0914 19:06:06.230712   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gpl2p
	I0914 19:06:06.230718   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:06.230725   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:06.230733   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:06.233365   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:06.233384   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:06.233391   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:06 GMT
	I0914 19:06:06.233397   29302 round_trippers.go:580]     Audit-Id: 53af8c6b-f3d3-4507-ba18-bcb4d7a95376
	I0914 19:06:06.233406   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:06.233422   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:06.233431   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:06.233443   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:06.233771   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gpl2p","generateName":"kube-proxy-","namespace":"kube-system","uid":"4e6ab5b8-53fa-4e56-b534-e130dc2b3c0f","resourceVersion":"761","creationTimestamp":"2023-09-14T19:03:50Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b81636f3-a9be-4290-be24-324c7fac8ce6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:03:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b81636f3-a9be-4290-be24-324c7fac8ce6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5750 chars]
	I0914 19:06:06.430710   29302 request.go:629] Waited for 196.348215ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/multinode-040952-m03
	I0914 19:06:06.430762   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952-m03
	I0914 19:06:06.430769   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:06.430779   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:06.430788   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:06.433906   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:06.433930   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:06.433942   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:06.433951   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:06.433960   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:06.433969   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:06 GMT
	I0914 19:06:06.433985   29302 round_trippers.go:580]     Audit-Id: 1280bf02-d81c-4bca-b4e5-275129840268
	I0914 19:06:06.433994   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:06.434112   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952-m03","uid":"28b45907-e363-4b10-afa7-ecf3cea247b8","resourceVersion":"772","creationTimestamp":"2023-09-14T19:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3204 chars]
	I0914 19:06:06.434453   29302 pod_ready.go:92] pod "kube-proxy-gpl2p" in "kube-system" namespace has status "Ready":"True"
	I0914 19:06:06.434474   29302 pod_ready.go:81] duration metric: took 401.294532ms waiting for pod "kube-proxy-gpl2p" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:06.434488   29302 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-hbsmt" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:06.630939   29302 request.go:629] Waited for 196.385647ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hbsmt
	I0914 19:06:06.631022   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hbsmt
	I0914 19:06:06.631030   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:06.631042   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:06.631051   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:06.633497   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:06.633520   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:06.633530   29302 round_trippers.go:580]     Audit-Id: 1dc1f940-384d-494a-8e64-361f1ad205ba
	I0914 19:06:06.633543   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:06.633552   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:06.633562   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:06.633573   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:06.633584   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:06 GMT
	I0914 19:06:06.633766   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hbsmt","generateName":"kube-proxy-","namespace":"kube-system","uid":"d68fe199-9969-47a9-95a1-04e766c5dbaa","resourceVersion":"788","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b81636f3-a9be-4290-be24-324c7fac8ce6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b81636f3-a9be-4290-be24-324c7fac8ce6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5928 chars]
	I0914 19:06:06.830679   29302 request.go:629] Waited for 196.393813ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:06.830735   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:06.830740   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:06.830747   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:06.830754   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:06.833354   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:06.833375   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:06.833382   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:06.833387   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:06.833392   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:06.833397   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:06.833402   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:06 GMT
	I0914 19:06:06.833407   29302 round_trippers.go:580]     Audit-Id: a24b66f4-fa51-4df4-9bc5-590f310c8108
	I0914 19:06:06.833985   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"782","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5283 chars]
	I0914 19:06:06.834382   29302 pod_ready.go:97] node "multinode-040952" hosting pod "kube-proxy-hbsmt" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-040952" has status "Ready":"False"
	I0914 19:06:06.834408   29302 pod_ready.go:81] duration metric: took 399.910926ms waiting for pod "kube-proxy-hbsmt" in "kube-system" namespace to be "Ready" ...
	E0914 19:06:06.834420   29302 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-040952" hosting pod "kube-proxy-hbsmt" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-040952" has status "Ready":"False"
	I0914 19:06:06.834433   29302 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-040952" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:07.030857   29302 request.go:629] Waited for 196.352242ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-040952
	I0914 19:06:07.030940   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-040952
	I0914 19:06:07.030951   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:07.030964   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:07.030977   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:07.034225   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:07.034245   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:07.034253   29302 round_trippers.go:580]     Audit-Id: 71cfae50-3c69-4f2b-8709-aad710c8dec2
	I0914 19:06:07.034260   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:07.034268   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:07.034276   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:07.034289   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:07.034298   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:06 GMT
	I0914 19:06:07.034501   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-040952","namespace":"kube-system","uid":"386eb63c-5554-4ab9-8241-b096f390ee9c","resourceVersion":"784","creationTimestamp":"2023-09-14T19:01:38Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f83b231eda73d0afcf9cdab17117c7e6","kubernetes.io/config.mirror":"f83b231eda73d0afcf9cdab17117c7e6","kubernetes.io/config.seen":"2023-09-14T19:01:32.411176140Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5148 chars]
	I0914 19:06:07.230128   29302 request.go:629] Waited for 195.265564ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:07.230211   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:07.230221   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:07.230229   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:07.230235   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:07.233612   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:07.233631   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:07.233641   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:07 GMT
	I0914 19:06:07.233648   29302 round_trippers.go:580]     Audit-Id: c6e16c92-92f1-4f61-b0d2-523db2c467d1
	I0914 19:06:07.233656   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:07.233665   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:07.233675   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:07.233684   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:07.234058   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"782","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5283 chars]
	I0914 19:06:07.234344   29302 pod_ready.go:97] node "multinode-040952" hosting pod "kube-scheduler-multinode-040952" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-040952" has status "Ready":"False"
	I0914 19:06:07.234368   29302 pod_ready.go:81] duration metric: took 399.923264ms waiting for pod "kube-scheduler-multinode-040952" in "kube-system" namespace to be "Ready" ...
	E0914 19:06:07.234381   29302 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-040952" hosting pod "kube-scheduler-multinode-040952" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-040952" has status "Ready":"False"
	I0914 19:06:07.234393   29302 pod_ready.go:38] duration metric: took 1.741133779s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
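
The pod_ready waits summarized above follow a simple pattern: fetch each system pod, check its Ready condition, and skip pods whose hosting node is itself not Ready. The sketch below shows that polling pattern with client-go; it is illustrative only, not minikube's actual pod_ready.go, and waitPodReady is a made-up name:

    package example

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls a pod until its Ready condition reports True or the
    // timeout elapses. Minikube's real pod_ready.go adds further checks, e.g.
    // skipping pods whose hosting node is not Ready, as seen in the log above.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
    }
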
	I0914 19:06:07.234417   29302 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 19:06:07.250231   29302 command_runner.go:130] > -16
	I0914 19:06:07.250255   29302 ops.go:34] apiserver oom_adj: -16
	I0914 19:06:07.250263   29302 kubeadm.go:640] restartCluster took 21.909989817s
	I0914 19:06:07.250271   29302 kubeadm.go:406] StartCluster complete in 21.938026901s
	I0914 19:06:07.250290   29302 settings.go:142] acquiring lock: {Name:mkaf2d84e9fceec2029b98353d3d8cae1b369e09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 19:06:07.250389   29302 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17217-7285/kubeconfig
	I0914 19:06:07.251059   29302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17217-7285/kubeconfig: {Name:mkd810f3a7b7ee0c3e3eff94a19f3da881e8200c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 19:06:07.251279   29302 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 19:06:07.251383   29302 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0914 19:06:07.253531   29302 out.go:177] * Enabled addons: 
	I0914 19:06:07.251517   29302 config.go:182] Loaded profile config "multinode-040952": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 19:06:07.251534   29302 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17217-7285/kubeconfig
	I0914 19:06:07.255467   29302 addons.go:502] enable addons completed in 4.093858ms: enabled=[]
	I0914 19:06:07.255670   29302 kapi.go:59] client config for multinode-040952: &rest.Config{Host:"https://192.168.39.14:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/client.crt", KeyFile:"/home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/client.key", CAFile:"/home/jenkins/minikube-integration/17217-7285/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c15e60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0914 19:06:07.255997   29302 round_trippers.go:463] GET https://192.168.39.14:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0914 19:06:07.256010   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:07.256017   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:07.256025   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:07.263309   29302 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0914 19:06:07.263329   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:07.263340   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:07.263348   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:07.263354   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:07.263359   29302 round_trippers.go:580]     Content-Length: 291
	I0914 19:06:07.263365   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:07 GMT
	I0914 19:06:07.263370   29302 round_trippers.go:580]     Audit-Id: 5a75d744-b3cd-40e6-abf4-7b1c8daac075
	I0914 19:06:07.263377   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:07.263397   29302 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"9776e459-4280-488a-924c-4e921bbd9495","resourceVersion":"796","creationTimestamp":"2023-09-14T19:01:40Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0914 19:06:07.263508   29302 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-040952" context rescaled to 1 replicas
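
The rescale step above reads the coredns Deployment's scale subresource (the autoscaling/v1 Scale object in the preceding response body) and only needs to write it back when the replica count differs from the target. A hedged sketch of that read-modify-write with client-go; scaleCoreDNS is an illustrative helper name, not minikube's code:

    package example

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // scaleCoreDNS reads the coredns Deployment's scale subresource and writes
    // it back only when the replica count differs from want, mirroring the
    // GET .../deployments/coredns/scale request above.
    func scaleCoreDNS(ctx context.Context, cs kubernetes.Interface, want int32) error {
        deployments := cs.AppsV1().Deployments("kube-system")
        scale, err := deployments.GetScale(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            return err
        }
        if scale.Spec.Replicas == want {
            return nil // already at the desired size, as in this run
        }
        scale.Spec.Replicas = want
        _, err = deployments.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
        return err
    }
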
	I0914 19:06:07.263529   29302 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.14 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 19:06:07.264985   29302 out.go:177] * Verifying Kubernetes components...
	I0914 19:06:07.266359   29302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 19:06:07.389385   29302 command_runner.go:130] > apiVersion: v1
	I0914 19:06:07.389403   29302 command_runner.go:130] > data:
	I0914 19:06:07.389408   29302 command_runner.go:130] >   Corefile: |
	I0914 19:06:07.389411   29302 command_runner.go:130] >     .:53 {
	I0914 19:06:07.389415   29302 command_runner.go:130] >         log
	I0914 19:06:07.389421   29302 command_runner.go:130] >         errors
	I0914 19:06:07.389425   29302 command_runner.go:130] >         health {
	I0914 19:06:07.389429   29302 command_runner.go:130] >            lameduck 5s
	I0914 19:06:07.389433   29302 command_runner.go:130] >         }
	I0914 19:06:07.389437   29302 command_runner.go:130] >         ready
	I0914 19:06:07.389443   29302 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0914 19:06:07.389447   29302 command_runner.go:130] >            pods insecure
	I0914 19:06:07.389455   29302 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0914 19:06:07.389473   29302 command_runner.go:130] >            ttl 30
	I0914 19:06:07.389477   29302 command_runner.go:130] >         }
	I0914 19:06:07.389483   29302 command_runner.go:130] >         prometheus :9153
	I0914 19:06:07.389487   29302 command_runner.go:130] >         hosts {
	I0914 19:06:07.389493   29302 command_runner.go:130] >            192.168.39.1 host.minikube.internal
	I0914 19:06:07.389497   29302 command_runner.go:130] >            fallthrough
	I0914 19:06:07.389501   29302 command_runner.go:130] >         }
	I0914 19:06:07.389508   29302 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0914 19:06:07.389513   29302 command_runner.go:130] >            max_concurrent 1000
	I0914 19:06:07.389517   29302 command_runner.go:130] >         }
	I0914 19:06:07.389520   29302 command_runner.go:130] >         cache 30
	I0914 19:06:07.389527   29302 command_runner.go:130] >         loop
	I0914 19:06:07.389532   29302 command_runner.go:130] >         reload
	I0914 19:06:07.389541   29302 command_runner.go:130] >         loadbalance
	I0914 19:06:07.389549   29302 command_runner.go:130] >     }
	I0914 19:06:07.389558   29302 command_runner.go:130] > kind: ConfigMap
	I0914 19:06:07.389564   29302 command_runner.go:130] > metadata:
	I0914 19:06:07.389573   29302 command_runner.go:130] >   creationTimestamp: "2023-09-14T19:01:40Z"
	I0914 19:06:07.389585   29302 command_runner.go:130] >   name: coredns
	I0914 19:06:07.389594   29302 command_runner.go:130] >   namespace: kube-system
	I0914 19:06:07.389604   29302 command_runner.go:130] >   resourceVersion: "404"
	I0914 19:06:07.389612   29302 command_runner.go:130] >   uid: 77b79b35-a304-4075-b4c4-6b8a52cfe75c
	I0914 19:06:07.389643   29302 node_ready.go:35] waiting up to 6m0s for node "multinode-040952" to be "Ready" ...
	I0914 19:06:07.389797   29302 start.go:890] CoreDNS already contains "host.minikube.internal" host record, skipping...
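
The skip above follows directly from the Corefile printed by the previous command: its hosts block already maps 192.168.39.1 to host.minikube.internal, so no ConfigMap update is required. A small illustrative sketch of that check, assuming a client-go clientset (hasMinikubeHostRecord is a made-up name):

    package example

    import (
        "context"
        "strings"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // hasMinikubeHostRecord reports whether the coredns Corefile already carries
    // a host.minikube.internal entry, which is what lets this run skip the update.
    func hasMinikubeHostRecord(ctx context.Context, cs kubernetes.Interface) (bool, error) {
        cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        return strings.Contains(cm.Data["Corefile"], "host.minikube.internal"), nil
    }
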
	I0914 19:06:07.431021   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:07.431047   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:07.431059   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:07.431069   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:07.434336   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:07.434359   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:07.434367   29302 round_trippers.go:580]     Audit-Id: f0218504-ef8b-4fee-a836-3f16c97e6d1d
	I0914 19:06:07.434372   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:07.434378   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:07.434383   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:07.434389   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:07.434399   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:07 GMT
	I0914 19:06:07.434888   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"782","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5283 chars]
	I0914 19:06:07.630657   29302 request.go:629] Waited for 195.358734ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:07.630713   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:07.630720   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:07.630729   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:07.630738   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:07.635002   29302 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 19:06:07.635021   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:07.635027   29302 round_trippers.go:580]     Audit-Id: 0e51cba7-34eb-44c3-be48-8785725a128f
	I0914 19:06:07.635033   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:07.635038   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:07.635043   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:07.635048   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:07.635053   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:07 GMT
	I0914 19:06:07.635788   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"782","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5283 chars]
	I0914 19:06:08.136884   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:08.136903   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:08.136913   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:08.136919   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:08.140137   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:08.140160   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:08.140168   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:08 GMT
	I0914 19:06:08.140173   29302 round_trippers.go:580]     Audit-Id: 9ec77217-1afd-42b6-aaf7-211e85629e48
	I0914 19:06:08.140179   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:08.140184   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:08.140189   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:08.140194   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:08.140344   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"782","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5283 chars]
	I0914 19:06:08.637040   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:08.637079   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:08.637091   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:08.637101   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:08.639714   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:08.639733   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:08.639744   29302 round_trippers.go:580]     Audit-Id: d47f9fd4-8dec-46b1-8ce9-436c0350c5ca
	I0914 19:06:08.639752   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:08.639760   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:08.639769   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:08.639779   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:08.639788   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:08 GMT
	I0914 19:06:08.640112   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"782","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5283 chars]
	I0914 19:06:09.136649   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:09.136682   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:09.136690   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:09.136696   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:09.139686   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:09.139704   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:09.139715   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:09.139724   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:09.139733   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:09.139739   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:09 GMT
	I0914 19:06:09.139745   29302 round_trippers.go:580]     Audit-Id: ae97ecdc-ac59-4df9-80fb-ab01ff2852ec
	I0914 19:06:09.139750   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:09.140167   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"782","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5283 chars]
	I0914 19:06:09.636845   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:09.636866   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:09.636874   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:09.636880   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:09.639508   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:09.639525   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:09.639534   29302 round_trippers.go:580]     Audit-Id: 2a2efe7f-361b-45a2-b3cb-a7e9e84043e9
	I0914 19:06:09.639541   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:09.639549   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:09.639558   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:09.639568   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:09.639578   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:09 GMT
	I0914 19:06:09.639997   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"782","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5283 chars]
	I0914 19:06:09.640405   29302 node_ready.go:58] node "multinode-040952" has status "Ready":"False"
	I0914 19:06:10.136599   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:10.136624   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:10.136638   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:10.136648   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:10.140273   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:10.140297   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:10.140306   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:10.140313   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:10.140320   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:10.140332   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:10.140340   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:10 GMT
	I0914 19:06:10.140347   29302 round_trippers.go:580]     Audit-Id: 1af6dc6d-a25f-4a81-86a3-d239224c606e
	I0914 19:06:10.140506   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:10.140798   29302 node_ready.go:49] node "multinode-040952" has status "Ready":"True"
	I0914 19:06:10.140815   29302 node_ready.go:38] duration metric: took 2.751153874s waiting for node "multinode-040952" to be "Ready" ...
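
The node_ready wait that just finished repeats GET /api/v1/nodes/multinode-040952 until the node's Ready condition turns True, which happens here at resourceVersion 868. A sketch of the same loop using apimachinery's wait.PollImmediate helper; illustrative only, not minikube's node_ready.go:

    package example

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls a node until its Ready condition reports True, the
    // transition this log sees between resourceVersion 782 and 868.
    func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
        return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                    return true, nil
                }
            }
            return false, nil
        })
    }
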
	I0914 19:06:10.140825   29302 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 19:06:10.140877   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
	I0914 19:06:10.140887   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:10.140897   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:10.140907   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:10.145518   29302 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 19:06:10.145535   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:10.145542   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:10.145547   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:10 GMT
	I0914 19:06:10.145557   29302 round_trippers.go:580]     Audit-Id: d738ec8e-27bb-4210-8329-89e64df5055c
	I0914 19:06:10.145569   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:10.145579   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:10.145590   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:10.146881   29302 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"868"},"items":[{"metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"790","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 83954 chars]
	I0914 19:06:10.149263   29302 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-qrv2r" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:10.149331   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qrv2r
	I0914 19:06:10.149342   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:10.149353   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:10.149364   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:10.151221   29302 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 19:06:10.151235   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:10.151241   29302 round_trippers.go:580]     Audit-Id: 9dce5aa8-17a9-43c4-9448-421e8ef000fe
	I0914 19:06:10.151247   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:10.151255   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:10.151264   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:10.151281   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:10.151288   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:10 GMT
	I0914 19:06:10.151447   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"790","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6544 chars]
	I0914 19:06:10.151815   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:10.151829   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:10.151839   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:10.151847   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:10.154035   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:10.154047   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:10.154053   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:10.154058   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:10.154063   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:10.154069   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:10 GMT
	I0914 19:06:10.154075   29302 round_trippers.go:580]     Audit-Id: f451201e-e118-40ff-8809-e06aa3aa8567
	I0914 19:06:10.154084   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:10.154352   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:10.154718   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qrv2r
	I0914 19:06:10.154731   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:10.154742   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:10.154752   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:10.156468   29302 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 19:06:10.156482   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:10.156491   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:10.156501   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:10.156513   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:10.156524   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:10 GMT
	I0914 19:06:10.156538   29302 round_trippers.go:580]     Audit-Id: 056aca82-7d21-4539-9de8-316f54300fbb
	I0914 19:06:10.156548   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:10.156671   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"790","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6544 chars]
	I0914 19:06:10.157120   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:10.157136   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:10.157147   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:10.157162   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:10.159000   29302 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 19:06:10.159014   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:10.159023   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:10.159031   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:10.159039   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:10.159049   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:10 GMT
	I0914 19:06:10.159059   29302 round_trippers.go:580]     Audit-Id: 053f7e6a-3d64-496b-a692-e6d8d7de77dc
	I0914 19:06:10.159074   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:10.159292   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:10.660315   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qrv2r
	I0914 19:06:10.660343   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:10.660354   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:10.660364   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:10.662669   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:10.662688   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:10.662694   29302 round_trippers.go:580]     Audit-Id: 0b5959bf-4f92-40f5-bff0-64259ee8d0e9
	I0914 19:06:10.662703   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:10.662711   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:10.662723   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:10.662732   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:10.662744   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:10 GMT
	I0914 19:06:10.663162   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"790","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6544 chars]
	I0914 19:06:10.663793   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:10.663810   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:10.663822   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:10.663830   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:10.667280   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:10.667294   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:10.667299   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:10.667304   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:10.667310   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:10 GMT
	I0914 19:06:10.667315   29302 round_trippers.go:580]     Audit-Id: adc471fd-2452-48eb-9634-4a15a4129e27
	I0914 19:06:10.667320   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:10.667325   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:10.667519   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:11.160702   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qrv2r
	I0914 19:06:11.160731   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:11.160744   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:11.160753   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:11.164208   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:11.164227   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:11.164234   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:11.164240   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:11 GMT
	I0914 19:06:11.164261   29302 round_trippers.go:580]     Audit-Id: 3b81510c-ceb9-488e-bc2e-b21d77b051e2
	I0914 19:06:11.164273   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:11.164281   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:11.164290   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:11.164555   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"790","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6544 chars]
	I0914 19:06:11.165152   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:11.165174   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:11.165187   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:11.165197   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:11.168098   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:11.168117   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:11.168125   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:11.168133   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:11.168142   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:11 GMT
	I0914 19:06:11.168151   29302 round_trippers.go:580]     Audit-Id: 15145bd3-b367-4e99-b3ce-0ae58ef5c733
	I0914 19:06:11.168161   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:11.168168   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:11.168530   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:11.660168   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qrv2r
	I0914 19:06:11.660193   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:11.660205   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:11.660216   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:11.663403   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:11.663424   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:11.663434   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:11.663442   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:11 GMT
	I0914 19:06:11.663449   29302 round_trippers.go:580]     Audit-Id: 3362ce2b-8605-45fd-8885-3eaeb408ef56
	I0914 19:06:11.663457   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:11.663466   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:11.663476   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:11.664334   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"790","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6544 chars]
	I0914 19:06:11.664760   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:11.664775   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:11.664785   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:11.664795   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:11.671505   29302 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0914 19:06:11.671522   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:11.671530   29302 round_trippers.go:580]     Audit-Id: 654293a2-0981-4bec-9543-4726a90c72a3
	I0914 19:06:11.671539   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:11.671551   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:11.671560   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:11.671567   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:11.671576   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:11 GMT
	I0914 19:06:11.671723   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:12.160486   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qrv2r
	I0914 19:06:12.160512   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:12.160524   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:12.160534   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:12.163604   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:12.163624   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:12.163634   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:12.163644   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:12.163652   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:12.163661   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:12 GMT
	I0914 19:06:12.163674   29302 round_trippers.go:580]     Audit-Id: 746f41fe-b54a-4602-ba74-6665d07e9fc7
	I0914 19:06:12.163683   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:12.164257   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"790","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6544 chars]
	I0914 19:06:12.164698   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:12.164712   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:12.164721   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:12.164731   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:12.166907   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:12.166920   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:12.166926   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:12.166934   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:12.166942   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:12 GMT
	I0914 19:06:12.166953   29302 round_trippers.go:580]     Audit-Id: e83a6e6d-40cb-4779-8c0a-8f5c050ff286
	I0914 19:06:12.166961   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:12.166970   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:12.167376   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:12.167641   29302 pod_ready.go:102] pod "coredns-5dd5756b68-qrv2r" in "kube-system" namespace has status "Ready":"False"
	I0914 19:06:12.660012   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qrv2r
	I0914 19:06:12.660034   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:12.660051   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:12.660059   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:12.664300   29302 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 19:06:12.664327   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:12.664338   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:12.664345   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:12.664352   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:12.664360   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:12.664369   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:12 GMT
	I0914 19:06:12.664384   29302 round_trippers.go:580]     Audit-Id: 49e3af30-584c-4ef5-942f-2f32701b7bc7
	I0914 19:06:12.665270   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"790","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6544 chars]
	I0914 19:06:12.665705   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:12.665719   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:12.665729   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:12.665738   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:12.668068   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:12.668088   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:12.668097   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:12.668105   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:12.668112   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:12 GMT
	I0914 19:06:12.668120   29302 round_trippers.go:580]     Audit-Id: 28f046b6-f759-4197-80f7-730e48f958ff
	I0914 19:06:12.668128   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:12.668142   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:12.668260   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:13.159876   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qrv2r
	I0914 19:06:13.159904   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:13.159912   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:13.159918   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:13.163892   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:13.163917   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:13.163928   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:13.163937   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:13 GMT
	I0914 19:06:13.163944   29302 round_trippers.go:580]     Audit-Id: 2bafd162-6571-48ef-8c6f-4b72770d2047
	I0914 19:06:13.163952   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:13.163966   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:13.163976   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:13.165138   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"882","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I0914 19:06:13.165753   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:13.165771   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:13.165782   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:13.165791   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:13.168088   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:13.168105   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:13.168112   29302 round_trippers.go:580]     Audit-Id: 767659c2-2c07-4c69-b006-9d19ff6d9f6d
	I0914 19:06:13.168118   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:13.168123   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:13.168128   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:13.168135   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:13.168143   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:13 GMT
	I0914 19:06:13.168401   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:13.168681   29302 pod_ready.go:92] pod "coredns-5dd5756b68-qrv2r" in "kube-system" namespace has status "Ready":"True"
	I0914 19:06:13.168695   29302 pod_ready.go:81] duration metric: took 3.01941396s waiting for pod "coredns-5dd5756b68-qrv2r" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:13.168703   29302 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-040952" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:13.168801   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-040952
	I0914 19:06:13.168814   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:13.168832   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:13.168846   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:13.171347   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:13.171368   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:13.171375   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:13.171380   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:13.171388   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:13.171397   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:13 GMT
	I0914 19:06:13.171404   29302 round_trippers.go:580]     Audit-Id: b18d0768-dc31-460c-beed-e50e3a19d6cf
	I0914 19:06:13.171411   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:13.172044   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-040952","namespace":"kube-system","uid":"69002c12-b452-4986-a79f-1d67702a52ef","resourceVersion":"785","creationTimestamp":"2023-09-14T19:01:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.14:2379","kubernetes.io/config.hash":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.mirror":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.seen":"2023-09-14T19:01:40.726714562Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6295 chars]
	I0914 19:06:13.172379   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:13.172391   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:13.172399   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:13.172405   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:13.175143   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:13.175157   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:13.175163   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:13 GMT
	I0914 19:06:13.175168   29302 round_trippers.go:580]     Audit-Id: f6242de5-c366-4c79-aa4f-5b2c5ce0d01e
	I0914 19:06:13.175174   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:13.175182   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:13.175190   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:13.175200   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:13.176009   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:13.176284   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-040952
	I0914 19:06:13.176295   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:13.176301   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:13.176307   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:13.178355   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:13.178376   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:13.178382   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:13.178387   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:13 GMT
	I0914 19:06:13.178393   29302 round_trippers.go:580]     Audit-Id: 8172c157-f43e-42e0-b3a6-8cbd28c89432
	I0914 19:06:13.178401   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:13.178409   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:13.178417   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:13.178832   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-040952","namespace":"kube-system","uid":"69002c12-b452-4986-a79f-1d67702a52ef","resourceVersion":"785","creationTimestamp":"2023-09-14T19:01:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.14:2379","kubernetes.io/config.hash":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.mirror":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.seen":"2023-09-14T19:01:40.726714562Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6295 chars]
	I0914 19:06:13.179275   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:13.179292   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:13.179302   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:13.179309   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:13.180983   29302 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 19:06:13.180994   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:13.180999   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:13.181004   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:13 GMT
	I0914 19:06:13.181009   29302 round_trippers.go:580]     Audit-Id: 7d797daa-6bd3-4f35-8046-01886aa5fa4e
	I0914 19:06:13.181014   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:13.181019   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:13.181024   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:13.181219   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:13.682300   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-040952
	I0914 19:06:13.682333   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:13.682342   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:13.682347   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:13.685143   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:13.685160   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:13.685166   29302 round_trippers.go:580]     Audit-Id: 0910f73d-781a-443b-b8e1-0d453e50ba92
	I0914 19:06:13.685172   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:13.685177   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:13.685182   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:13.685187   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:13.685192   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:13 GMT
	I0914 19:06:13.685503   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-040952","namespace":"kube-system","uid":"69002c12-b452-4986-a79f-1d67702a52ef","resourceVersion":"785","creationTimestamp":"2023-09-14T19:01:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.14:2379","kubernetes.io/config.hash":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.mirror":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.seen":"2023-09-14T19:01:40.726714562Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6295 chars]
	I0914 19:06:13.685920   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:13.685934   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:13.685941   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:13.685947   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:13.688227   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:13.688240   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:13.688246   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:13.688252   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:13.688260   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:13.688268   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:13.688281   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:13 GMT
	I0914 19:06:13.688288   29302 round_trippers.go:580]     Audit-Id: 078b7d2a-29bc-4729-9a02-7236c4049ad7
	I0914 19:06:13.688474   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:14.182102   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-040952
	I0914 19:06:14.182125   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:14.182133   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:14.182140   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:14.187517   29302 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0914 19:06:14.187544   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:14.187554   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:14.187562   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:14.187569   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:14.187577   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:14.187586   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:14 GMT
	I0914 19:06:14.187594   29302 round_trippers.go:580]     Audit-Id: dd780464-2280-4b93-b398-b175b603d0fe
	I0914 19:06:14.188035   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-040952","namespace":"kube-system","uid":"69002c12-b452-4986-a79f-1d67702a52ef","resourceVersion":"785","creationTimestamp":"2023-09-14T19:01:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.14:2379","kubernetes.io/config.hash":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.mirror":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.seen":"2023-09-14T19:01:40.726714562Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6295 chars]
	I0914 19:06:14.188554   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:14.188572   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:14.188583   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:14.188592   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:14.190606   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:14.190620   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:14.190626   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:14.190632   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:14 GMT
	I0914 19:06:14.190637   29302 round_trippers.go:580]     Audit-Id: 104efd51-1025-4755-af8b-f207cfcdb912
	I0914 19:06:14.190642   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:14.190647   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:14.190652   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:14.190979   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:14.682687   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-040952
	I0914 19:06:14.682711   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:14.682719   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:14.682725   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:14.690728   29302 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0914 19:06:14.690764   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:14.690775   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:14.690783   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:14.690791   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:14 GMT
	I0914 19:06:14.690799   29302 round_trippers.go:580]     Audit-Id: 4dc518a5-6cbd-4561-8ed6-e72b82b2abda
	I0914 19:06:14.690806   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:14.690814   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:14.690995   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-040952","namespace":"kube-system","uid":"69002c12-b452-4986-a79f-1d67702a52ef","resourceVersion":"887","creationTimestamp":"2023-09-14T19:01:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.14:2379","kubernetes.io/config.hash":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.mirror":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.seen":"2023-09-14T19:01:40.726714562Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6071 chars]
	I0914 19:06:14.691406   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:14.691420   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:14.691427   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:14.691433   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:14.697743   29302 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0914 19:06:14.697765   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:14.697774   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:14.697779   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:14.697784   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:14.697789   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:14 GMT
	I0914 19:06:14.697794   29302 round_trippers.go:580]     Audit-Id: 07d3511e-72f3-415a-b985-0c38f9c2dc48
	I0914 19:06:14.697799   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:14.698080   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:14.698416   29302 pod_ready.go:92] pod "etcd-multinode-040952" in "kube-system" namespace has status "Ready":"True"
	I0914 19:06:14.698432   29302 pod_ready.go:81] duration metric: took 1.529723471s waiting for pod "etcd-multinode-040952" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:14.698448   29302 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-040952" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:14.698508   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-040952
	I0914 19:06:14.698517   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:14.698524   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:14.698530   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:14.703391   29302 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 19:06:14.703406   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:14.703412   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:14.703418   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:14.703423   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:14.703428   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:14.703433   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:14 GMT
	I0914 19:06:14.703439   29302 round_trippers.go:580]     Audit-Id: 0b9ff4df-c192-426d-837d-19a8ddc6d994
	I0914 19:06:14.703718   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-040952","namespace":"kube-system","uid":"10fd42d2-c2af-48e4-8724-c8ffe95daa20","resourceVersion":"871","creationTimestamp":"2023-09-14T19:01:41Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.14:8443","kubernetes.io/config.hash":"8756931ebb3ad632d1fa90a79d546b12","kubernetes.io/config.mirror":"8756931ebb3ad632d1fa90a79d546b12","kubernetes.io/config.seen":"2023-09-14T19:01:40.726715710Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7606 chars]
	I0914 19:06:14.704127   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:14.704140   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:14.704147   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:14.704153   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:14.706425   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:14.706444   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:14.706451   29302 round_trippers.go:580]     Audit-Id: 6eee19bb-2b91-4350-b2ae-7edfbd41930d
	I0914 19:06:14.706457   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:14.706462   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:14.706467   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:14.706472   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:14.706478   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:14 GMT
	I0914 19:06:14.706615   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:14.706908   29302 pod_ready.go:92] pod "kube-apiserver-multinode-040952" in "kube-system" namespace has status "Ready":"True"
	I0914 19:06:14.706921   29302 pod_ready.go:81] duration metric: took 8.465952ms waiting for pod "kube-apiserver-multinode-040952" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:14.706930   29302 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-040952" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:14.706986   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-040952
	I0914 19:06:14.706996   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:14.707007   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:14.707017   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:14.710085   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:14.710105   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:14.710115   29302 round_trippers.go:580]     Audit-Id: 37a4af49-de22-42c5-8342-96bdccfba829
	I0914 19:06:14.710126   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:14.710135   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:14.710143   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:14.710152   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:14.710160   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:14 GMT
	I0914 19:06:14.710726   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-040952","namespace":"kube-system","uid":"a3657cb3-c202-4067-83e1-e015b97f23c7","resourceVersion":"884","creationTimestamp":"2023-09-14T19:01:41Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"eae1e4ee5d796cbce52373fd75c02fd6","kubernetes.io/config.mirror":"eae1e4ee5d796cbce52373fd75c02fd6","kubernetes.io/config.seen":"2023-09-14T19:01:40.726708753Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7174 chars]
	I0914 19:06:14.830503   29302 request.go:629] Waited for 119.282235ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:14.830554   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:14.830558   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:14.830566   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:14.830572   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:14.833064   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:14.833083   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:14.833090   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:14 GMT
	I0914 19:06:14.833095   29302 round_trippers.go:580]     Audit-Id: 7a8584d4-7b4d-4f0c-a673-2711303dfb2c
	I0914 19:06:14.833100   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:14.833106   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:14.833110   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:14.833116   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:14.833241   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:14.833562   29302 pod_ready.go:92] pod "kube-controller-manager-multinode-040952" in "kube-system" namespace has status "Ready":"True"
	I0914 19:06:14.833577   29302 pod_ready.go:81] duration metric: took 126.641384ms waiting for pod "kube-controller-manager-multinode-040952" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:14.833587   29302 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gldkh" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:15.030888   29302 request.go:629] Waited for 197.237265ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gldkh
	I0914 19:06:15.030946   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gldkh
	I0914 19:06:15.030951   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:15.030960   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:15.030966   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:15.034339   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:15.034359   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:15.034366   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:15.034374   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:15.034386   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:15.034394   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:15.034408   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:14 GMT
	I0914 19:06:15.034416   29302 round_trippers.go:580]     Audit-Id: 3c39cfc6-1f06-4726-9679-50e437a9b84d
	I0914 19:06:15.034690   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gldkh","generateName":"kube-proxy-","namespace":"kube-system","uid":"55ba7c02-d066-4399-a622-621499fbc662","resourceVersion":"541","creationTimestamp":"2023-09-14T19:02:56Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b81636f3-a9be-4290-be24-324c7fac8ce6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:02:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b81636f3-a9be-4290-be24-324c7fac8ce6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5541 chars]
	I0914 19:06:15.230480   29302 request.go:629] Waited for 195.333524ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/multinode-040952-m02
	I0914 19:06:15.230552   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952-m02
	I0914 19:06:15.230557   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:15.230565   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:15.230574   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:15.234304   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:15.234329   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:15.234339   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:15.234347   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:15.234359   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:15.234366   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:15 GMT
	I0914 19:06:15.234377   29302 round_trippers.go:580]     Audit-Id: 4a324e73-8fa1-482f-bde6-ae80be99f721
	I0914 19:06:15.234386   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:15.234528   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952-m02","uid":"26bddb4d-d211-4e3d-a188-317e100d2aa5","resourceVersion":"608","creationTimestamp":"2023-09-14T19:02:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:02:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3266 chars]
	I0914 19:06:15.234774   29302 pod_ready.go:92] pod "kube-proxy-gldkh" in "kube-system" namespace has status "Ready":"True"
	I0914 19:06:15.234787   29302 pod_ready.go:81] duration metric: took 401.195035ms waiting for pod "kube-proxy-gldkh" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:15.234796   29302 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gpl2p" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:15.430003   29302 request.go:629] Waited for 195.152769ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gpl2p
	I0914 19:06:15.430096   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gpl2p
	I0914 19:06:15.430104   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:15.430118   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:15.430142   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:15.433237   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:15.433271   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:15.433281   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:15.433290   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:15.433300   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:15 GMT
	I0914 19:06:15.433309   29302 round_trippers.go:580]     Audit-Id: 92d372f9-e9c9-4d13-8b75-1b3ebd7f2435
	I0914 19:06:15.433321   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:15.433329   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:15.433627   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gpl2p","generateName":"kube-proxy-","namespace":"kube-system","uid":"4e6ab5b8-53fa-4e56-b534-e130dc2b3c0f","resourceVersion":"761","creationTimestamp":"2023-09-14T19:03:50Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b81636f3-a9be-4290-be24-324c7fac8ce6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:03:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b81636f3-a9be-4290-be24-324c7fac8ce6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5750 chars]
	I0914 19:06:15.630434   29302 request.go:629] Waited for 196.369841ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/multinode-040952-m03
	I0914 19:06:15.630534   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952-m03
	I0914 19:06:15.630546   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:15.630557   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:15.630568   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:15.633799   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:15.633824   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:15.633834   29302 round_trippers.go:580]     Audit-Id: 8ea32575-14e9-412a-ba38-fd00269447f5
	I0914 19:06:15.633844   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:15.633852   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:15.633864   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:15.633873   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:15.633887   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:15 GMT
	I0914 19:06:15.634144   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952-m03","uid":"28b45907-e363-4b10-afa7-ecf3cea247b8","resourceVersion":"891","creationTimestamp":"2023-09-14T19:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3084 chars]
	I0914 19:06:15.634401   29302 pod_ready.go:92] pod "kube-proxy-gpl2p" in "kube-system" namespace has status "Ready":"True"
	I0914 19:06:15.634416   29302 pod_ready.go:81] duration metric: took 399.614214ms waiting for pod "kube-proxy-gpl2p" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:15.634430   29302 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hbsmt" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:15.830846   29302 request.go:629] Waited for 196.353294ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hbsmt
	I0914 19:06:15.830928   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hbsmt
	I0914 19:06:15.830933   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:15.830945   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:15.830952   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:15.834221   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:15.834246   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:15.834259   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:15.834267   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:15.834274   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:15 GMT
	I0914 19:06:15.834282   29302 round_trippers.go:580]     Audit-Id: 44182567-ce38-4fce-a842-f78410d89ee9
	I0914 19:06:15.834289   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:15.834298   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:15.834802   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hbsmt","generateName":"kube-proxy-","namespace":"kube-system","uid":"d68fe199-9969-47a9-95a1-04e766c5dbaa","resourceVersion":"798","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b81636f3-a9be-4290-be24-324c7fac8ce6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b81636f3-a9be-4290-be24-324c7fac8ce6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5738 chars]
	I0914 19:06:16.030675   29302 request.go:629] Waited for 195.45562ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:16.030731   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:16.030736   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:16.030743   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:16.030750   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:16.034236   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:16.034260   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:16.034267   29302 round_trippers.go:580]     Audit-Id: e468604d-7ce9-469a-b812-ed3c9c650d6e
	I0914 19:06:16.034275   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:16.034281   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:16.034286   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:16.034291   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:16.034297   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:16 GMT
	I0914 19:06:16.034614   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:16.034941   29302 pod_ready.go:92] pod "kube-proxy-hbsmt" in "kube-system" namespace has status "Ready":"True"
	I0914 19:06:16.034956   29302 pod_ready.go:81] duration metric: took 400.519289ms waiting for pod "kube-proxy-hbsmt" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:16.034964   29302 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-040952" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:16.230342   29302 request.go:629] Waited for 195.324407ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-040952
	I0914 19:06:16.230449   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-040952
	I0914 19:06:16.230454   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:16.230462   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:16.230470   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:16.233547   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:16.233564   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:16.233572   29302 round_trippers.go:580]     Audit-Id: 224fde99-6866-4d6c-81fe-2f97bc0c6734
	I0914 19:06:16.233577   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:16.233587   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:16.233592   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:16.233597   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:16.233602   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:16 GMT
	I0914 19:06:16.233823   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-040952","namespace":"kube-system","uid":"386eb63c-5554-4ab9-8241-b096f390ee9c","resourceVersion":"784","creationTimestamp":"2023-09-14T19:01:38Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f83b231eda73d0afcf9cdab17117c7e6","kubernetes.io/config.mirror":"f83b231eda73d0afcf9cdab17117c7e6","kubernetes.io/config.seen":"2023-09-14T19:01:32.411176140Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5148 chars]
	I0914 19:06:16.430509   29302 request.go:629] Waited for 196.339279ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:16.430573   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:16.430580   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:16.430590   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:16.430600   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:16.433517   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:16.433535   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:16.433542   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:16.433559   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:16.433565   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:16 GMT
	I0914 19:06:16.433571   29302 round_trippers.go:580]     Audit-Id: 1da1d693-84a7-4480-b07f-7a386588f044
	I0914 19:06:16.433576   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:16.433581   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:16.433983   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:16.630679   29302 request.go:629] Waited for 196.348452ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-040952
	I0914 19:06:16.630764   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-040952
	I0914 19:06:16.630769   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:16.630776   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:16.630783   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:16.633557   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:16.633575   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:16.633582   29302 round_trippers.go:580]     Audit-Id: 2136e32a-148d-4e1d-825d-95e56e17f7f3
	I0914 19:06:16.633589   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:16.633597   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:16.633605   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:16.633612   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:16.633629   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:16 GMT
	I0914 19:06:16.634402   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-040952","namespace":"kube-system","uid":"386eb63c-5554-4ab9-8241-b096f390ee9c","resourceVersion":"784","creationTimestamp":"2023-09-14T19:01:38Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f83b231eda73d0afcf9cdab17117c7e6","kubernetes.io/config.mirror":"f83b231eda73d0afcf9cdab17117c7e6","kubernetes.io/config.seen":"2023-09-14T19:01:32.411176140Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5148 chars]
	I0914 19:06:16.830072   29302 request.go:629] Waited for 195.313935ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:16.830145   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:16.830152   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:16.830160   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:16.830168   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:16.832962   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:16.832981   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:16.832988   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:16.832993   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:16.832998   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:16.833006   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:16.833011   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:16 GMT
	I0914 19:06:16.833016   29302 round_trippers.go:580]     Audit-Id: 685468aa-007f-4cd0-908f-286f4b9b8738
	I0914 19:06:16.833566   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:17.334599   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-040952
	I0914 19:06:17.334622   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:17.334645   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:17.334652   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:17.337790   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:17.337810   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:17.337817   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:17.337823   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:17.337828   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:17 GMT
	I0914 19:06:17.337835   29302 round_trippers.go:580]     Audit-Id: 13885e51-e7a2-41bd-a4e6-27c1810b7f5b
	I0914 19:06:17.337843   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:17.337850   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:17.338071   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-040952","namespace":"kube-system","uid":"386eb63c-5554-4ab9-8241-b096f390ee9c","resourceVersion":"784","creationTimestamp":"2023-09-14T19:01:38Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f83b231eda73d0afcf9cdab17117c7e6","kubernetes.io/config.mirror":"f83b231eda73d0afcf9cdab17117c7e6","kubernetes.io/config.seen":"2023-09-14T19:01:32.411176140Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5148 chars]
	I0914 19:06:17.338439   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:17.338455   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:17.338465   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:17.338474   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:17.340824   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:17.340837   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:17.340843   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:17 GMT
	I0914 19:06:17.340848   29302 round_trippers.go:580]     Audit-Id: e2df7950-3f43-43ac-a2ff-9ebcb6aba048
	I0914 19:06:17.340854   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:17.340862   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:17.340871   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:17.340883   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:17.341277   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:17.834981   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-040952
	I0914 19:06:17.835006   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:17.835015   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:17.835021   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:17.837948   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:17.837973   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:17.837984   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:17 GMT
	I0914 19:06:17.837992   29302 round_trippers.go:580]     Audit-Id: bf96bd3c-445d-4267-b684-9a852b7ce0ca
	I0914 19:06:17.838000   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:17.838008   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:17.838020   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:17.838027   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:17.838816   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-040952","namespace":"kube-system","uid":"386eb63c-5554-4ab9-8241-b096f390ee9c","resourceVersion":"784","creationTimestamp":"2023-09-14T19:01:38Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f83b231eda73d0afcf9cdab17117c7e6","kubernetes.io/config.mirror":"f83b231eda73d0afcf9cdab17117c7e6","kubernetes.io/config.seen":"2023-09-14T19:01:32.411176140Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5148 chars]
	I0914 19:06:17.839223   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:17.839236   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:17.839244   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:17.839250   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:17.842020   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:17.842042   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:17.842052   29302 round_trippers.go:580]     Audit-Id: 58f6c61f-2107-4d49-bc25-beaf577ebc0b
	I0914 19:06:17.842063   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:17.842073   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:17.842084   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:17.842094   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:17.842104   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:17 GMT
	I0914 19:06:17.842191   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:18.334912   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-040952
	I0914 19:06:18.334936   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:18.334944   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:18.334950   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:18.337727   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:18.337753   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:18.337763   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:18.337772   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:18.337784   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:18 GMT
	I0914 19:06:18.337793   29302 round_trippers.go:580]     Audit-Id: 91452a7a-9433-48f7-bb48-08448530a97b
	I0914 19:06:18.337804   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:18.337811   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:18.338243   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-040952","namespace":"kube-system","uid":"386eb63c-5554-4ab9-8241-b096f390ee9c","resourceVersion":"894","creationTimestamp":"2023-09-14T19:01:38Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f83b231eda73d0afcf9cdab17117c7e6","kubernetes.io/config.mirror":"f83b231eda73d0afcf9cdab17117c7e6","kubernetes.io/config.seen":"2023-09-14T19:01:32.411176140Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4904 chars]
	I0914 19:06:18.338636   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:18.338654   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:18.338664   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:18.338674   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:18.342026   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:18.342059   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:18.342068   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:18.342078   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:18.342085   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:18.342096   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:18 GMT
	I0914 19:06:18.342104   29302 round_trippers.go:580]     Audit-Id: a5dad678-33fe-4c2f-a5f5-c10a6380266e
	I0914 19:06:18.342118   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:18.342444   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:18.342720   29302 pod_ready.go:92] pod "kube-scheduler-multinode-040952" in "kube-system" namespace has status "Ready":"True"
	I0914 19:06:18.342732   29302 pod_ready.go:81] duration metric: took 2.30776305s waiting for pod "kube-scheduler-multinode-040952" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:18.342741   29302 pod_ready.go:38] duration metric: took 8.201906021s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
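The wait loop logged above is the pod_ready phase: for each control-plane pod the client GETs the Pod object, checks its Ready condition, then GETs the owning Node, and repeats until everything reports Ready. The "Waited for ... due to client-side throttling, not priority and fairness" lines are client-go's default client-side rate limiter (5 QPS, burst 10) spacing those polls out; they are informational, not failures. Below is a minimal client-go sketch of the same kind of check, an illustration with a placeholder kubeconfig path and pod name taken from the command line, not minikube's actual pod_ready.go:

    // podready.go: poll a kube-system pod until its Ready condition is True.
    // Illustrative sketch only; arguments are a kubeconfig path and a pod name.
    package main

    import (
    	"context"
    	"fmt"
    	"os"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	kubeconfig, podName := os.Args[1], os.Args[2]

    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		panic(err)
    	}
    	// Raise the client-side rate limit so a tight poll loop is not throttled
    	// (the defaults of 5 QPS / 10 burst are what produce the "client-side
    	// throttling" waits seen in the log).
    	cfg.QPS = 50
    	cfg.Burst = 100

    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	ctx := context.Background()
    	for {
    		pod, err := client.CoreV1().Pods("kube-system").Get(ctx, podName, metav1.GetOptions{})
    		if err == nil {
    			for _, cond := range pod.Status.Conditions {
    				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
    					fmt.Printf("pod %q is Ready\n", podName)
    					return
    				}
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }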
	I0914 19:06:18.342758   29302 api_server.go:52] waiting for apiserver process to appear ...
	I0914 19:06:18.342802   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 19:06:18.356335   29302 command_runner.go:130] > 1693
	I0914 19:06:18.356824   29302 api_server.go:72] duration metric: took 11.093271286s to wait for apiserver process to appear ...
	I0914 19:06:18.356842   29302 api_server.go:88] waiting for apiserver healthz status ...
	I0914 19:06:18.356862   29302 api_server.go:253] Checking apiserver healthz at https://192.168.39.14:8443/healthz ...
	I0914 19:06:18.362653   29302 api_server.go:279] https://192.168.39.14:8443/healthz returned 200:
	ok
	I0914 19:06:18.362710   29302 round_trippers.go:463] GET https://192.168.39.14:8443/version
	I0914 19:06:18.362717   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:18.362725   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:18.362731   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:18.363650   29302 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0914 19:06:18.363667   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:18.363677   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:18.363686   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:18.363694   29302 round_trippers.go:580]     Content-Length: 263
	I0914 19:06:18.363711   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:18 GMT
	I0914 19:06:18.363719   29302 round_trippers.go:580]     Audit-Id: 01d336c4-24b2-4b6e-a634-c932a4f80f56
	I0914 19:06:18.363728   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:18.363733   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:18.363748   29302 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.1",
	  "gitCommit": "8dc49c4b984b897d423aab4971090e1879eb4f23",
	  "gitTreeState": "clean",
	  "buildDate": "2023-08-24T11:16:30Z",
	  "goVersion": "go1.20.7",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0914 19:06:18.363790   29302 api_server.go:141] control plane version: v1.28.1
	I0914 19:06:18.363805   29302 api_server.go:131] duration metric: took 6.957442ms to wait for apiserver health ...
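The two probes just logged, GET /healthz (returning "ok") and GET /version (returning the v1.28.1 build info), can be reproduced with the discovery client. The function below is a companion to the sketch above, reusing its clientset and imports; it is an illustration, not minikube's api_server.go:

    // checkAPIServer mirrors the healthz and /version probes in the log.
    func checkAPIServer(ctx context.Context, client *kubernetes.Clientset) error {
    	// GET /healthz through the discovery REST client; the body should be "ok".
    	body, err := client.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
    	if err != nil {
    		return err
    	}
    	fmt.Printf("healthz: %s\n", body)

    	// GET /version; GitVersion is the control plane version (v1.28.1 above).
    	info, err := client.Discovery().ServerVersion()
    	if err != nil {
    		return err
    	}
    	fmt.Printf("control plane version: %s\n", info.GitVersion)
    	return nil
    }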
	I0914 19:06:18.363814   29302 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 19:06:18.363875   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
	I0914 19:06:18.363883   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:18.363889   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:18.363900   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:18.367955   29302 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 19:06:18.367989   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:18.367997   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:18.368005   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:18.368013   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:18 GMT
	I0914 19:06:18.368025   29302 round_trippers.go:580]     Audit-Id: 4a4def47-e1cc-4f97-a173-69327418d154
	I0914 19:06:18.368035   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:18.368044   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:18.369884   29302 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"894"},"items":[{"metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"882","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82928 chars]
	I0914 19:06:18.373265   29302 system_pods.go:59] 12 kube-system pods found
	I0914 19:06:18.373287   29302 system_pods.go:61] "coredns-5dd5756b68-qrv2r" [f9293d00-1000-4ffa-b978-d08c00eee7e7] Running
	I0914 19:06:18.373292   29302 system_pods.go:61] "etcd-multinode-040952" [69002c12-b452-4986-a79f-1d67702a52ef] Running
	I0914 19:06:18.373296   29302 system_pods.go:61] "kindnet-hvz8s" [38b5564e-8c11-45e8-9751-bcaa4258a342] Running
	I0914 19:06:18.373299   29302 system_pods.go:61] "kindnet-lrkhw" [9861f216-97e0-4761-9531-cb34e8855913] Running
	I0914 19:06:18.373303   29302 system_pods.go:61] "kindnet-pjfsc" [7716e479-4492-439b-9bdf-077a541dc949] Running
	I0914 19:06:18.373307   29302 system_pods.go:61] "kube-apiserver-multinode-040952" [10fd42d2-c2af-48e4-8724-c8ffe95daa20] Running
	I0914 19:06:18.373312   29302 system_pods.go:61] "kube-controller-manager-multinode-040952" [a3657cb3-c202-4067-83e1-e015b97f23c7] Running
	I0914 19:06:18.373315   29302 system_pods.go:61] "kube-proxy-gldkh" [55ba7c02-d066-4399-a622-621499fbc662] Running
	I0914 19:06:18.373326   29302 system_pods.go:61] "kube-proxy-gpl2p" [4e6ab5b8-53fa-4e56-b534-e130dc2b3c0f] Running
	I0914 19:06:18.373335   29302 system_pods.go:61] "kube-proxy-hbsmt" [d68fe199-9969-47a9-95a1-04e766c5dbaa] Running
	I0914 19:06:18.373339   29302 system_pods.go:61] "kube-scheduler-multinode-040952" [386eb63c-5554-4ab9-8241-b096f390ee9c] Running
	I0914 19:06:18.373342   29302 system_pods.go:61] "storage-provisioner" [8f25fe5b-237f-415a-baca-e4342106bb4d] Running
	I0914 19:06:18.373347   29302 system_pods.go:74] duration metric: took 9.528517ms to wait for pod list to return data ...
	I0914 19:06:18.373355   29302 default_sa.go:34] waiting for default service account to be created ...
	I0914 19:06:18.430623   29302 request.go:629] Waited for 57.191118ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/default/serviceaccounts
	I0914 19:06:18.430678   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/default/serviceaccounts
	I0914 19:06:18.430682   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:18.430689   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:18.430695   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:18.433750   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:18.433768   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:18.433775   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:18.433780   29302 round_trippers.go:580]     Content-Length: 261
	I0914 19:06:18.433785   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:18 GMT
	I0914 19:06:18.433790   29302 round_trippers.go:580]     Audit-Id: f58f454f-de35-4fde-b782-3e31600d0a05
	I0914 19:06:18.433795   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:18.433803   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:18.433808   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:18.433825   29302 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"894"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"751abfd7-43aa-4bf5-a223-71659884f01c","resourceVersion":"335","creationTimestamp":"2023-09-14T19:01:53Z"}}]}
	I0914 19:06:18.433967   29302 default_sa.go:45] found service account: "default"
	I0914 19:06:18.433981   29302 default_sa.go:55] duration metric: took 60.621039ms for default service account to be created ...
	I0914 19:06:18.433987   29302 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 19:06:18.630408   29302 request.go:629] Waited for 196.359387ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
	I0914 19:06:18.630467   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
	I0914 19:06:18.630472   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:18.630480   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:18.630486   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:18.635088   29302 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 19:06:18.635116   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:18.635126   29302 round_trippers.go:580]     Audit-Id: 40dbf5e6-bdfd-4c25-924c-528834eef0a7
	I0914 19:06:18.635135   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:18.635142   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:18.635150   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:18.635159   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:18.635173   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:18 GMT
	I0914 19:06:18.636346   29302 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"894"},"items":[{"metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"882","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82928 chars]
	I0914 19:06:18.639989   29302 system_pods.go:86] 12 kube-system pods found
	I0914 19:06:18.640017   29302 system_pods.go:89] "coredns-5dd5756b68-qrv2r" [f9293d00-1000-4ffa-b978-d08c00eee7e7] Running
	I0914 19:06:18.640024   29302 system_pods.go:89] "etcd-multinode-040952" [69002c12-b452-4986-a79f-1d67702a52ef] Running
	I0914 19:06:18.640031   29302 system_pods.go:89] "kindnet-hvz8s" [38b5564e-8c11-45e8-9751-bcaa4258a342] Running
	I0914 19:06:18.640037   29302 system_pods.go:89] "kindnet-lrkhw" [9861f216-97e0-4761-9531-cb34e8855913] Running
	I0914 19:06:18.640043   29302 system_pods.go:89] "kindnet-pjfsc" [7716e479-4492-439b-9bdf-077a541dc949] Running
	I0914 19:06:18.640050   29302 system_pods.go:89] "kube-apiserver-multinode-040952" [10fd42d2-c2af-48e4-8724-c8ffe95daa20] Running
	I0914 19:06:18.640058   29302 system_pods.go:89] "kube-controller-manager-multinode-040952" [a3657cb3-c202-4067-83e1-e015b97f23c7] Running
	I0914 19:06:18.640064   29302 system_pods.go:89] "kube-proxy-gldkh" [55ba7c02-d066-4399-a622-621499fbc662] Running
	I0914 19:06:18.640071   29302 system_pods.go:89] "kube-proxy-gpl2p" [4e6ab5b8-53fa-4e56-b534-e130dc2b3c0f] Running
	I0914 19:06:18.640080   29302 system_pods.go:89] "kube-proxy-hbsmt" [d68fe199-9969-47a9-95a1-04e766c5dbaa] Running
	I0914 19:06:18.640088   29302 system_pods.go:89] "kube-scheduler-multinode-040952" [386eb63c-5554-4ab9-8241-b096f390ee9c] Running
	I0914 19:06:18.640095   29302 system_pods.go:89] "storage-provisioner" [8f25fe5b-237f-415a-baca-e4342106bb4d] Running
	I0914 19:06:18.640110   29302 system_pods.go:126] duration metric: took 206.118337ms to wait for k8s-apps to be running ...
	I0914 19:06:18.640118   29302 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 19:06:18.640169   29302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 19:06:18.654395   29302 system_svc.go:56] duration metric: took 14.272365ms WaitForService to wait for kubelet.
	I0914 19:06:18.654416   29302 kubeadm.go:581] duration metric: took 11.390867757s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0914 19:06:18.654443   29302 node_conditions.go:102] verifying NodePressure condition ...
	I0914 19:06:18.830833   29302 request.go:629] Waited for 176.33044ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes
	I0914 19:06:18.830908   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes
	I0914 19:06:18.830915   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:18.830925   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:18.830934   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:18.833992   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:18.834011   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:18.834020   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:18.834029   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:18 GMT
	I0914 19:06:18.834038   29302 round_trippers.go:580]     Audit-Id: 78eec727-aee2-400e-8c95-4146a9496a91
	I0914 19:06:18.834047   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:18.834056   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:18.834064   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:18.834284   29302 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"894"},"items":[{"metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 13543 chars]
	I0914 19:06:18.835016   29302 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 19:06:18.835038   29302 node_conditions.go:123] node cpu capacity is 2
	I0914 19:06:18.835048   29302 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 19:06:18.835052   29302 node_conditions.go:123] node cpu capacity is 2
	I0914 19:06:18.835058   29302 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 19:06:18.835067   29302 node_conditions.go:123] node cpu capacity is 2
	I0914 19:06:18.835073   29302 node_conditions.go:105] duration metric: took 180.624501ms to run NodePressure ...
	I0914 19:06:18.835093   29302 start.go:228] waiting for startup goroutines ...
	I0914 19:06:18.835102   29302 start.go:233] waiting for cluster config update ...
	I0914 19:06:18.835115   29302 start.go:242] writing updated cluster config ...
	I0914 19:06:18.835683   29302 config.go:182] Loaded profile config "multinode-040952": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 19:06:18.835796   29302 profile.go:148] Saving config to /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/config.json ...
	I0914 19:06:18.838910   29302 out.go:177] * Starting worker node multinode-040952-m02 in cluster multinode-040952
	I0914 19:06:18.840147   29302 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0914 19:06:18.840163   29302 cache.go:57] Caching tarball of preloaded images
	I0914 19:06:18.840249   29302 preload.go:174] Found /home/jenkins/minikube-integration/17217-7285/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0914 19:06:18.840261   29302 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0914 19:06:18.840334   29302 profile.go:148] Saving config to /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/config.json ...
	I0914 19:06:18.840476   29302 start.go:365] acquiring machines lock for multinode-040952-m02: {Name:mk07a05e24a79016fc0a298412b40eb87df032d8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 19:06:18.840512   29302 start.go:369] acquired machines lock for "multinode-040952-m02" in 19.707µs
	I0914 19:06:18.840566   29302 start.go:96] Skipping create...Using existing machine configuration
	I0914 19:06:18.840575   29302 fix.go:54] fixHost starting: m02
	I0914 19:06:18.840830   29302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0914 19:06:18.840857   29302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 19:06:18.855469   29302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46067
	I0914 19:06:18.855890   29302 main.go:141] libmachine: () Calling .GetVersion
	I0914 19:06:18.856329   29302 main.go:141] libmachine: Using API Version  1
	I0914 19:06:18.856352   29302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 19:06:18.856677   29302 main.go:141] libmachine: () Calling .GetMachineName
	I0914 19:06:18.856891   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .DriverName
	I0914 19:06:18.857065   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetState
	I0914 19:06:18.858712   29302 fix.go:102] recreateIfNeeded on multinode-040952-m02: state=Stopped err=<nil>
	I0914 19:06:18.858735   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .DriverName
	W0914 19:06:18.858914   29302 fix.go:128] unexpected machine state, will restart: <nil>
	I0914 19:06:18.861118   29302 out.go:177] * Restarting existing kvm2 VM for "multinode-040952-m02" ...
	I0914 19:06:18.862649   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .Start
	I0914 19:06:18.862832   29302 main.go:141] libmachine: (multinode-040952-m02) Ensuring networks are active...
	I0914 19:06:18.863554   29302 main.go:141] libmachine: (multinode-040952-m02) Ensuring network default is active
	I0914 19:06:18.863887   29302 main.go:141] libmachine: (multinode-040952-m02) Ensuring network mk-multinode-040952 is active
	I0914 19:06:18.864247   29302 main.go:141] libmachine: (multinode-040952-m02) Getting domain xml...
	I0914 19:06:18.864791   29302 main.go:141] libmachine: (multinode-040952-m02) Creating domain...
	I0914 19:06:20.114677   29302 main.go:141] libmachine: (multinode-040952-m02) Waiting to get IP...
	I0914 19:06:20.115697   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:20.116116   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | unable to find current IP address of domain multinode-040952-m02 in network mk-multinode-040952
	I0914 19:06:20.116177   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | I0914 19:06:20.116093   29537 retry.go:31] will retry after 292.793167ms: waiting for machine to come up
	I0914 19:06:20.410624   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:20.411041   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | unable to find current IP address of domain multinode-040952-m02 in network mk-multinode-040952
	I0914 19:06:20.411062   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | I0914 19:06:20.411011   29537 retry.go:31] will retry after 329.185161ms: waiting for machine to come up
	I0914 19:06:20.741486   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:20.741956   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | unable to find current IP address of domain multinode-040952-m02 in network mk-multinode-040952
	I0914 19:06:20.741984   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | I0914 19:06:20.741922   29537 retry.go:31] will retry after 372.179082ms: waiting for machine to come up
	I0914 19:06:21.115108   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:21.115492   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | unable to find current IP address of domain multinode-040952-m02 in network mk-multinode-040952
	I0914 19:06:21.115522   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | I0914 19:06:21.115446   29537 retry.go:31] will retry after 552.546331ms: waiting for machine to come up
	I0914 19:06:21.669165   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:21.669673   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | unable to find current IP address of domain multinode-040952-m02 in network mk-multinode-040952
	I0914 19:06:21.669702   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | I0914 19:06:21.669630   29537 retry.go:31] will retry after 641.98724ms: waiting for machine to come up
	I0914 19:06:22.313770   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:22.314305   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | unable to find current IP address of domain multinode-040952-m02 in network mk-multinode-040952
	I0914 19:06:22.314344   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | I0914 19:06:22.314258   29537 retry.go:31] will retry after 792.672163ms: waiting for machine to come up
	I0914 19:06:23.108201   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:23.108628   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | unable to find current IP address of domain multinode-040952-m02 in network mk-multinode-040952
	I0914 19:06:23.108656   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | I0914 19:06:23.108582   29537 retry.go:31] will retry after 820.609535ms: waiting for machine to come up
	I0914 19:06:23.930887   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:23.931350   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | unable to find current IP address of domain multinode-040952-m02 in network mk-multinode-040952
	I0914 19:06:23.931383   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | I0914 19:06:23.931293   29537 retry.go:31] will retry after 933.919914ms: waiting for machine to come up
	I0914 19:06:24.866306   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:24.866762   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | unable to find current IP address of domain multinode-040952-m02 in network mk-multinode-040952
	I0914 19:06:24.866796   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | I0914 19:06:24.866720   29537 retry.go:31] will retry after 1.175445783s: waiting for machine to come up
	I0914 19:06:26.044181   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:26.044639   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | unable to find current IP address of domain multinode-040952-m02 in network mk-multinode-040952
	I0914 19:06:26.044674   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | I0914 19:06:26.044595   29537 retry.go:31] will retry after 1.659114662s: waiting for machine to come up
	I0914 19:06:27.705347   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:27.705796   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | unable to find current IP address of domain multinode-040952-m02 in network mk-multinode-040952
	I0914 19:06:27.705832   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | I0914 19:06:27.705738   29537 retry.go:31] will retry after 2.838813162s: waiting for machine to come up
	I0914 19:06:30.546592   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:30.547049   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | unable to find current IP address of domain multinode-040952-m02 in network mk-multinode-040952
	I0914 19:06:30.547092   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | I0914 19:06:30.547042   29537 retry.go:31] will retry after 2.43743272s: waiting for machine to come up
	I0914 19:06:32.987818   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:32.988277   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | unable to find current IP address of domain multinode-040952-m02 in network mk-multinode-040952
	I0914 19:06:32.988300   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | I0914 19:06:32.988246   29537 retry.go:31] will retry after 4.479558003s: waiting for machine to come up
	I0914 19:06:37.471961   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:37.472352   29302 main.go:141] libmachine: (multinode-040952-m02) Found IP for machine: 192.168.39.16
	I0914 19:06:37.472379   29302 main.go:141] libmachine: (multinode-040952-m02) Reserving static IP address...
	I0914 19:06:37.472392   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has current primary IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:37.472813   29302 main.go:141] libmachine: (multinode-040952-m02) Reserved static IP address: 192.168.39.16
	I0914 19:06:37.472867   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "multinode-040952-m02", mac: "52:54:00:2e:0b:03", ip: "192.168.39.16"} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
	I0914 19:06:37.472882   29302 main.go:141] libmachine: (multinode-040952-m02) Waiting for SSH to be available...
	I0914 19:06:37.472912   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | skip adding static IP to network mk-multinode-040952 - found existing host DHCP lease matching {name: "multinode-040952-m02", mac: "52:54:00:2e:0b:03", ip: "192.168.39.16"}
	I0914 19:06:37.472930   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | Getting to WaitForSSH function...
	I0914 19:06:37.474853   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:37.475216   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
	I0914 19:06:37.475243   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:37.475331   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | Using SSH client type: external
	I0914 19:06:37.475371   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17217-7285/.minikube/machines/multinode-040952-m02/id_rsa (-rw-------)
	I0914 19:06:37.475423   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.16 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17217-7285/.minikube/machines/multinode-040952-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 19:06:37.475447   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | About to run SSH command:
	I0914 19:06:37.475460   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | exit 0
	I0914 19:06:37.565151   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | SSH cmd err, output: <nil>: 
	I0914 19:06:37.565511   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetConfigRaw
	I0914 19:06:37.566140   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetIP
	I0914 19:06:37.568703   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:37.569097   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
	I0914 19:06:37.569132   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:37.569351   29302 profile.go:148] Saving config to /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/config.json ...
	I0914 19:06:37.569551   29302 machine.go:88] provisioning docker machine ...
	I0914 19:06:37.569568   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .DriverName
	I0914 19:06:37.569768   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetMachineName
	I0914 19:06:37.569927   29302 buildroot.go:166] provisioning hostname "multinode-040952-m02"
	I0914 19:06:37.569954   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetMachineName
	I0914 19:06:37.570118   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHHostname
	I0914 19:06:37.572245   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:37.572611   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
	I0914 19:06:37.572640   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:37.572754   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHPort
	I0914 19:06:37.572896   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
	I0914 19:06:37.573067   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
	I0914 19:06:37.573182   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHUsername
	I0914 19:06:37.573336   29302 main.go:141] libmachine: Using SSH client type: native
	I0914 19:06:37.573757   29302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0914 19:06:37.573780   29302 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-040952-m02 && echo "multinode-040952-m02" | sudo tee /etc/hostname
	I0914 19:06:37.710270   29302 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-040952-m02
	
	I0914 19:06:37.710294   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHHostname
	I0914 19:06:37.712933   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:37.713287   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
	I0914 19:06:37.713322   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:37.713438   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHPort
	I0914 19:06:37.713649   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
	I0914 19:06:37.713830   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
	I0914 19:06:37.713965   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHUsername
	I0914 19:06:37.714153   29302 main.go:141] libmachine: Using SSH client type: native
	I0914 19:06:37.714540   29302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0914 19:06:37.714569   29302 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-040952-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-040952-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-040952-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 19:06:37.850271   29302 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 19:06:37.850302   29302 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17217-7285/.minikube CaCertPath:/home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17217-7285/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17217-7285/.minikube}
	I0914 19:06:37.850321   29302 buildroot.go:174] setting up certificates
	I0914 19:06:37.850331   29302 provision.go:83] configureAuth start
	I0914 19:06:37.850343   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetMachineName
	I0914 19:06:37.850630   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetIP
	I0914 19:06:37.853071   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:37.853477   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
	I0914 19:06:37.853512   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:37.853665   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHHostname
	I0914 19:06:37.855889   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:37.856295   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
	I0914 19:06:37.856327   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:37.856394   29302 provision.go:138] copyHostCerts
	I0914 19:06:37.856430   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17217-7285/.minikube/ca.pem
	I0914 19:06:37.856463   29302 exec_runner.go:144] found /home/jenkins/minikube-integration/17217-7285/.minikube/ca.pem, removing ...
	I0914 19:06:37.856473   29302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17217-7285/.minikube/ca.pem
	I0914 19:06:37.856544   29302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17217-7285/.minikube/ca.pem (1082 bytes)
	I0914 19:06:37.856653   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17217-7285/.minikube/cert.pem
	I0914 19:06:37.856672   29302 exec_runner.go:144] found /home/jenkins/minikube-integration/17217-7285/.minikube/cert.pem, removing ...
	I0914 19:06:37.856676   29302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17217-7285/.minikube/cert.pem
	I0914 19:06:37.856699   29302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17217-7285/.minikube/cert.pem (1123 bytes)
	I0914 19:06:37.856741   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17217-7285/.minikube/key.pem
	I0914 19:06:37.856756   29302 exec_runner.go:144] found /home/jenkins/minikube-integration/17217-7285/.minikube/key.pem, removing ...
	I0914 19:06:37.856762   29302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17217-7285/.minikube/key.pem
	I0914 19:06:37.856781   29302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17217-7285/.minikube/key.pem (1679 bytes)
	I0914 19:06:37.856823   29302 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17217-7285/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca-key.pem org=jenkins.multinode-040952-m02 san=[192.168.39.16 192.168.39.16 localhost 127.0.0.1 minikube multinode-040952-m02]
	I0914 19:06:37.904344   29302 provision.go:172] copyRemoteCerts
	I0914 19:06:37.904397   29302 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 19:06:37.904417   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHHostname
	I0914 19:06:37.906652   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:37.906972   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
	I0914 19:06:37.907008   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:37.907156   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHPort
	I0914 19:06:37.907312   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
	I0914 19:06:37.907470   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHUsername
	I0914 19:06:37.907613   29302 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/multinode-040952-m02/id_rsa Username:docker}
	I0914 19:06:38.000649   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0914 19:06:38.000741   29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 19:06:38.025953   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0914 19:06:38.026028   29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0914 19:06:38.048996   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0914 19:06:38.049067   29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 19:06:38.072478   29302 provision.go:86] duration metric: configureAuth took 222.133675ms
	I0914 19:06:38.072507   29302 buildroot.go:189] setting minikube options for container-runtime
	I0914 19:06:38.072712   29302 config.go:182] Loaded profile config "multinode-040952": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 19:06:38.072733   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .DriverName
	I0914 19:06:38.072954   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHHostname
	I0914 19:06:38.075633   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:38.075959   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
	I0914 19:06:38.076005   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:38.076116   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHPort
	I0914 19:06:38.076304   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
	I0914 19:06:38.076482   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
	I0914 19:06:38.076626   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHUsername
	I0914 19:06:38.076778   29302 main.go:141] libmachine: Using SSH client type: native
	I0914 19:06:38.077069   29302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0914 19:06:38.077082   29302 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0914 19:06:38.199048   29302 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0914 19:06:38.199074   29302 buildroot.go:70] root file system type: tmpfs
	I0914 19:06:38.199195   29302 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0914 19:06:38.199220   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHHostname
	I0914 19:06:38.201601   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:38.201971   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
	I0914 19:06:38.201992   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:38.202160   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHPort
	I0914 19:06:38.202374   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
	I0914 19:06:38.202529   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
	I0914 19:06:38.202642   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHUsername
	I0914 19:06:38.202785   29302 main.go:141] libmachine: Using SSH client type: native
	I0914 19:06:38.203087   29302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0914 19:06:38.203150   29302 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.39.14"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0914 19:06:38.339052   29302 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.39.14
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0914 19:06:38.339081   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHHostname
	I0914 19:06:38.341807   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:38.342226   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
	I0914 19:06:38.342261   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:38.342430   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHPort
	I0914 19:06:38.342621   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
	I0914 19:06:38.342798   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
	I0914 19:06:38.342954   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHUsername
	I0914 19:06:38.343119   29302 main.go:141] libmachine: Using SSH client type: native
	I0914 19:06:38.343432   29302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0914 19:06:38.343461   29302 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0914 19:06:39.223778   29302 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0914 19:06:39.223805   29302 machine.go:91] provisioned docker machine in 1.654241082s
	I0914 19:06:39.223818   29302 start.go:300] post-start starting for "multinode-040952-m02" (driver="kvm2")
	I0914 19:06:39.223828   29302 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 19:06:39.223843   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .DriverName
	I0914 19:06:39.224176   29302 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 19:06:39.224211   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHHostname
	I0914 19:06:39.226901   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:39.227247   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
	I0914 19:06:39.227280   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:39.227544   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHPort
	I0914 19:06:39.227745   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
	I0914 19:06:39.227911   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHUsername
	I0914 19:06:39.228053   29302 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/multinode-040952-m02/id_rsa Username:docker}
	I0914 19:06:39.321534   29302 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 19:06:39.325932   29302 command_runner.go:130] > NAME=Buildroot
	I0914 19:06:39.325948   29302 command_runner.go:130] > VERSION=2021.02.12-1-gaa3debf-dirty
	I0914 19:06:39.325957   29302 command_runner.go:130] > ID=buildroot
	I0914 19:06:39.325962   29302 command_runner.go:130] > VERSION_ID=2021.02.12
	I0914 19:06:39.325972   29302 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0914 19:06:39.326365   29302 info.go:137] Remote host: Buildroot 2021.02.12
	I0914 19:06:39.326381   29302 filesync.go:126] Scanning /home/jenkins/minikube-integration/17217-7285/.minikube/addons for local assets ...
	I0914 19:06:39.326432   29302 filesync.go:126] Scanning /home/jenkins/minikube-integration/17217-7285/.minikube/files for local assets ...
	I0914 19:06:39.326501   29302 filesync.go:149] local asset: /home/jenkins/minikube-integration/17217-7285/.minikube/files/etc/ssl/certs/145062.pem -> 145062.pem in /etc/ssl/certs
	I0914 19:06:39.326513   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/files/etc/ssl/certs/145062.pem -> /etc/ssl/certs/145062.pem
	I0914 19:06:39.326584   29302 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 19:06:39.336967   29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/files/etc/ssl/certs/145062.pem --> /etc/ssl/certs/145062.pem (1708 bytes)
	I0914 19:06:39.360557   29302 start.go:303] post-start completed in 136.725285ms
	I0914 19:06:39.360581   29302 fix.go:56] fixHost completed within 20.520003113s
	I0914 19:06:39.360605   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHHostname
	I0914 19:06:39.362948   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:39.363269   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
	I0914 19:06:39.363315   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:39.363388   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHPort
	I0914 19:06:39.363595   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
	I0914 19:06:39.363783   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
	I0914 19:06:39.363936   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHUsername
	I0914 19:06:39.364099   29302 main.go:141] libmachine: Using SSH client type: native
	I0914 19:06:39.364460   29302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0914 19:06:39.364472   29302 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0914 19:06:39.486077   29302 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694718399.434257584
	
	I0914 19:06:39.486101   29302 fix.go:206] guest clock: 1694718399.434257584
	I0914 19:06:39.486110   29302 fix.go:219] Guest: 2023-09-14 19:06:39.434257584 +0000 UTC Remote: 2023-09-14 19:06:39.360584834 +0000 UTC m=+78.429360914 (delta=73.67275ms)
	I0914 19:06:39.486128   29302 fix.go:190] guest clock delta is within tolerance: 73.67275ms
	I0914 19:06:39.486135   29302 start.go:83] releasing machines lock for "multinode-040952-m02", held for 20.645613984s
	I0914 19:06:39.486160   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .DriverName
	I0914 19:06:39.486442   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetIP
	I0914 19:06:39.488972   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:39.489301   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
	I0914 19:06:39.489321   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:39.491933   29302 out.go:177] * Found network options:
	I0914 19:06:39.493577   29302 out.go:177]   - NO_PROXY=192.168.39.14
	W0914 19:06:39.495217   29302 proxy.go:119] fail to check proxy env: Error ip not in block
	I0914 19:06:39.495254   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .DriverName
	I0914 19:06:39.495809   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .DriverName
	I0914 19:06:39.495995   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .DriverName
	I0914 19:06:39.496072   29302 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 19:06:39.496116   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHHostname
	W0914 19:06:39.496205   29302 proxy.go:119] fail to check proxy env: Error ip not in block
	I0914 19:06:39.496278   29302 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0914 19:06:39.496299   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHHostname
	I0914 19:06:39.498773   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:39.498969   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:39.499150   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
	I0914 19:06:39.499181   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:39.499303   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHPort
	I0914 19:06:39.499318   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
	I0914 19:06:39.499348   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:39.499474   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
	I0914 19:06:39.499542   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHPort
	I0914 19:06:39.499625   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHUsername
	I0914 19:06:39.499690   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
	I0914 19:06:39.499747   29302 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/multinode-040952-m02/id_rsa Username:docker}
	I0914 19:06:39.499829   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHUsername
	I0914 19:06:39.499990   29302 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/multinode-040952-m02/id_rsa Username:docker}
	I0914 19:06:39.587315   29302 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0914 19:06:39.587941   29302 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 19:06:39.588006   29302 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 19:06:39.610801   29302 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0914 19:06:39.610851   29302 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0914 19:06:39.610876   29302 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 19:06:39.610891   29302 start.go:469] detecting cgroup driver to use...
	I0914 19:06:39.610989   29302 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 19:06:39.629605   29302 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0914 19:06:39.630150   29302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0914 19:06:39.641201   29302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0914 19:06:39.651880   29302 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0914 19:06:39.651937   29302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0914 19:06:39.663251   29302 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 19:06:39.674202   29302 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0914 19:06:39.685211   29302 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 19:06:39.696908   29302 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 19:06:39.709126   29302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0914 19:06:39.721014   29302 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 19:06:39.731728   29302 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0914 19:06:39.731788   29302 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 19:06:39.742220   29302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 19:06:39.854266   29302 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0914 19:06:39.871417   29302 start.go:469] detecting cgroup driver to use...
	I0914 19:06:39.871488   29302 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0914 19:06:39.884609   29302 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0914 19:06:39.884650   29302 command_runner.go:130] > [Unit]
	I0914 19:06:39.884657   29302 command_runner.go:130] > Description=Docker Application Container Engine
	I0914 19:06:39.884663   29302 command_runner.go:130] > Documentation=https://docs.docker.com
	I0914 19:06:39.884669   29302 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0914 19:06:39.884677   29302 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0914 19:06:39.884682   29302 command_runner.go:130] > StartLimitBurst=3
	I0914 19:06:39.884689   29302 command_runner.go:130] > StartLimitIntervalSec=60
	I0914 19:06:39.884693   29302 command_runner.go:130] > [Service]
	I0914 19:06:39.884698   29302 command_runner.go:130] > Type=notify
	I0914 19:06:39.884702   29302 command_runner.go:130] > Restart=on-failure
	I0914 19:06:39.884708   29302 command_runner.go:130] > Environment=NO_PROXY=192.168.39.14
	I0914 19:06:39.884715   29302 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0914 19:06:39.884726   29302 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0914 19:06:39.884735   29302 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0914 19:06:39.884743   29302 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0914 19:06:39.884752   29302 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0914 19:06:39.884761   29302 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0914 19:06:39.884768   29302 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0914 19:06:39.884787   29302 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0914 19:06:39.884796   29302 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0914 19:06:39.884802   29302 command_runner.go:130] > ExecStart=
	I0914 19:06:39.884821   29302 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I0914 19:06:39.884831   29302 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0914 19:06:39.884838   29302 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0914 19:06:39.884845   29302 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0914 19:06:39.884852   29302 command_runner.go:130] > LimitNOFILE=infinity
	I0914 19:06:39.884856   29302 command_runner.go:130] > LimitNPROC=infinity
	I0914 19:06:39.884862   29302 command_runner.go:130] > LimitCORE=infinity
	I0914 19:06:39.884867   29302 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0914 19:06:39.884875   29302 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0914 19:06:39.884879   29302 command_runner.go:130] > TasksMax=infinity
	I0914 19:06:39.884888   29302 command_runner.go:130] > TimeoutStartSec=0
	I0914 19:06:39.884894   29302 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0914 19:06:39.884898   29302 command_runner.go:130] > Delegate=yes
	I0914 19:06:39.884905   29302 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0914 19:06:39.884917   29302 command_runner.go:130] > KillMode=process
	I0914 19:06:39.884923   29302 command_runner.go:130] > [Install]
	I0914 19:06:39.884929   29302 command_runner.go:130] > WantedBy=multi-user.target
	I0914 19:06:39.885921   29302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 19:06:39.902340   29302 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 19:06:39.919241   29302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 19:06:39.931882   29302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0914 19:06:39.944141   29302 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0914 19:06:39.980328   29302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0914 19:06:39.993054   29302 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 19:06:40.010119   29302 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0914 19:06:40.010413   29302 ssh_runner.go:195] Run: which cri-dockerd
	I0914 19:06:40.014171   29302 command_runner.go:130] > /usr/bin/cri-dockerd
	I0914 19:06:40.014287   29302 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0914 19:06:40.024688   29302 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0914 19:06:40.042167   29302 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0914 19:06:40.160404   29302 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0914 19:06:40.272827   29302 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0914 19:06:40.272855   29302 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0914 19:06:40.289795   29302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 19:06:40.398781   29302 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0914 19:06:41.803191   29302 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.40437357s)
	I0914 19:06:41.803251   29302 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0914 19:06:41.905435   29302 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0914 19:06:42.032291   29302 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0914 19:06:42.160622   29302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 19:06:42.277173   29302 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0914 19:06:42.292786   29302 command_runner.go:130] ! Job failed. See "journalctl -xe" for details.
	I0914 19:06:42.294889   29302 out.go:177] 
	W0914 19:06:42.296193   29302 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	W0914 19:06:42.296210   29302 out.go:239] * 
	* 
	W0914 19:06:42.297001   29302 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 19:06:42.298210   29302 out.go:177] 

                                                
                                                
** /stderr **
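The restart fails while re-provisioning the worker node multinode-040952-m02: "sudo systemctl restart cri-docker.socket" exits with status 1, so minikube aborts with RUNTIME_ENABLE (exit status 90). A minimal diagnostic sketch follows, assuming SSH access to the affected guest; the unit state on this particular run was not captured, so these commands are illustrative (using the same node flag the test suite itself uses in the Audit table below), not output reproduced from the job:

	# Open a shell on the failing worker node
	out/minikube-linux-amd64 -p multinode-040952 ssh -n multinode-040952-m02

	# Inside the guest: check why the cri-docker socket/service failed to restart
	sudo systemctl status cri-docker.socket cri-docker.service
	sudo journalctl -xe -u cri-docker.socket
	sudo journalctl -xe -u cri-docker.service

	# Inspect the drop-in minikube wrote just before the restart
	# (10-cni.conf, 189 bytes per the log above)
	sudo systemctl cat cri-docker.service
	cat /etc/systemd/system/cri-docker.service.d/10-cni.conf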
multinode_test.go:297: failed to run minikube start. args "out/minikube-linux-amd64 node list -p multinode-040952" : exit status 90
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-040952
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-040952 -n multinode-040952
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040952 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-040952 logs -n 25: (1.260679982s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-040952 ssh -n                                                                 | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | 14 Sep 23 19:04 UTC |
	|         | multinode-040952-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-040952 cp multinode-040952-m02:/home/docker/cp-test.txt                       | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | 14 Sep 23 19:04 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3444693695/001/cp-test_multinode-040952-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-040952 ssh -n                                                                 | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | 14 Sep 23 19:04 UTC |
	|         | multinode-040952-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-040952 cp multinode-040952-m02:/home/docker/cp-test.txt                       | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | 14 Sep 23 19:04 UTC |
	|         | multinode-040952:/home/docker/cp-test_multinode-040952-m02_multinode-040952.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-040952 ssh -n                                                                 | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | 14 Sep 23 19:04 UTC |
	|         | multinode-040952-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-040952 ssh -n multinode-040952 sudo cat                                       | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | 14 Sep 23 19:04 UTC |
	|         | /home/docker/cp-test_multinode-040952-m02_multinode-040952.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-040952 cp multinode-040952-m02:/home/docker/cp-test.txt                       | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | 14 Sep 23 19:04 UTC |
	|         | multinode-040952-m03:/home/docker/cp-test_multinode-040952-m02_multinode-040952-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-040952 ssh -n                                                                 | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | 14 Sep 23 19:04 UTC |
	|         | multinode-040952-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-040952 ssh -n multinode-040952-m03 sudo cat                                   | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | 14 Sep 23 19:04 UTC |
	|         | /home/docker/cp-test_multinode-040952-m02_multinode-040952-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-040952 cp testdata/cp-test.txt                                                | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | 14 Sep 23 19:04 UTC |
	|         | multinode-040952-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-040952 ssh -n                                                                 | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | 14 Sep 23 19:04 UTC |
	|         | multinode-040952-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-040952 cp multinode-040952-m03:/home/docker/cp-test.txt                       | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | 14 Sep 23 19:04 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3444693695/001/cp-test_multinode-040952-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-040952 ssh -n                                                                 | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | 14 Sep 23 19:04 UTC |
	|         | multinode-040952-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-040952 cp multinode-040952-m03:/home/docker/cp-test.txt                       | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | 14 Sep 23 19:04 UTC |
	|         | multinode-040952:/home/docker/cp-test_multinode-040952-m03_multinode-040952.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-040952 ssh -n                                                                 | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | 14 Sep 23 19:04 UTC |
	|         | multinode-040952-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-040952 ssh -n multinode-040952 sudo cat                                       | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | 14 Sep 23 19:04 UTC |
	|         | /home/docker/cp-test_multinode-040952-m03_multinode-040952.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-040952 cp multinode-040952-m03:/home/docker/cp-test.txt                       | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | 14 Sep 23 19:04 UTC |
	|         | multinode-040952-m02:/home/docker/cp-test_multinode-040952-m03_multinode-040952-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-040952 ssh -n                                                                 | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | 14 Sep 23 19:04 UTC |
	|         | multinode-040952-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-040952 ssh -n multinode-040952-m02 sudo cat                                   | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | 14 Sep 23 19:04 UTC |
	|         | /home/docker/cp-test_multinode-040952-m03_multinode-040952-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-040952 node stop m03                                                          | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | 14 Sep 23 19:04 UTC |
	| node    | multinode-040952 node start                                                             | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | 14 Sep 23 19:04 UTC |
	|         | m03 --alsologtostderr                                                                   |                  |         |         |                     |                     |
	| node    | list -p multinode-040952                                                                | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC |                     |
	| stop    | -p multinode-040952                                                                     | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | 14 Sep 23 19:05 UTC |
	| start   | -p multinode-040952                                                                     | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:05 UTC |                     |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-040952                                                                | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:06 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/14 19:05:20
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 19:05:20.962804   29302 out.go:296] Setting OutFile to fd 1 ...
	I0914 19:05:20.963060   29302 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 19:05:20.963070   29302 out.go:309] Setting ErrFile to fd 2...
	I0914 19:05:20.963075   29302 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 19:05:20.963243   29302 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17217-7285/.minikube/bin
	I0914 19:05:20.963781   29302 out.go:303] Setting JSON to false
	I0914 19:05:20.964724   29302 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":2870,"bootTime":1694715451,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 19:05:20.964780   29302 start.go:138] virtualization: kvm guest
	I0914 19:05:20.967109   29302 out.go:177] * [multinode-040952] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0914 19:05:20.968562   29302 out.go:177]   - MINIKUBE_LOCATION=17217
	I0914 19:05:20.969984   29302 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 19:05:20.968648   29302 notify.go:220] Checking for updates...
	I0914 19:05:20.972859   29302 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17217-7285/kubeconfig
	I0914 19:05:20.974265   29302 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17217-7285/.minikube
	I0914 19:05:20.975509   29302 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 19:05:20.976805   29302 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 19:05:20.978678   29302 config.go:182] Loaded profile config "multinode-040952": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 19:05:20.978756   29302 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 19:05:20.979122   29302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0914 19:05:20.979158   29302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 19:05:20.994127   29302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36753
	I0914 19:05:20.994544   29302 main.go:141] libmachine: () Calling .GetVersion
	I0914 19:05:20.994996   29302 main.go:141] libmachine: Using API Version  1
	I0914 19:05:20.995035   29302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 19:05:20.995534   29302 main.go:141] libmachine: () Calling .GetMachineName
	I0914 19:05:20.995713   29302 main.go:141] libmachine: (multinode-040952) Calling .DriverName
	I0914 19:05:21.030837   29302 out.go:177] * Using the kvm2 driver based on existing profile
	I0914 19:05:21.032222   29302 start.go:298] selected driver: kvm2
	I0914 19:05:21.032235   29302 start.go:902] validating driver "kvm2" against &{Name:multinode-040952 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.28.1 ClusterName:multinode-040952 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.14 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.16 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.107 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel
:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath
: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 19:05:21.032388   29302 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 19:05:21.032684   29302 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 19:05:21.032744   29302 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17217-7285/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 19:05:21.046926   29302 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0914 19:05:21.047549   29302 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 19:05:21.047615   29302 cni.go:84] Creating CNI manager for ""
	I0914 19:05:21.047628   29302 cni.go:136] 3 nodes found, recommending kindnet
	I0914 19:05:21.047635   29302 start_flags.go:321] config:
	{Name:multinode-040952 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-040952 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.14 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.16 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.107 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
AutoPauseInterval:1m0s}
	I0914 19:05:21.047846   29302 iso.go:125] acquiring lock: {Name:mk542b08865b5897b02c4d217212972b66d5575d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 19:05:21.049820   29302 out.go:177] * Starting control plane node multinode-040952 in cluster multinode-040952
	I0914 19:05:21.051078   29302 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0914 19:05:21.051117   29302 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17217-7285/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-amd64.tar.lz4
	I0914 19:05:21.051132   29302 cache.go:57] Caching tarball of preloaded images
	I0914 19:05:21.051200   29302 preload.go:174] Found /home/jenkins/minikube-integration/17217-7285/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0914 19:05:21.051211   29302 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0914 19:05:21.051357   29302 profile.go:148] Saving config to /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/config.json ...
	I0914 19:05:21.051546   29302 start.go:365] acquiring machines lock for multinode-040952: {Name:mk07a05e24a79016fc0a298412b40eb87df032d8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 19:05:21.051585   29302 start.go:369] acquired machines lock for "multinode-040952" in 22.658µs
	I0914 19:05:21.051598   29302 start.go:96] Skipping create...Using existing machine configuration
	I0914 19:05:21.051604   29302 fix.go:54] fixHost starting: 
	I0914 19:05:21.051851   29302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0914 19:05:21.051877   29302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 19:05:21.065211   29302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41551
	I0914 19:05:21.065673   29302 main.go:141] libmachine: () Calling .GetVersion
	I0914 19:05:21.066137   29302 main.go:141] libmachine: Using API Version  1
	I0914 19:05:21.066161   29302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 19:05:21.066462   29302 main.go:141] libmachine: () Calling .GetMachineName
	I0914 19:05:21.066623   29302 main.go:141] libmachine: (multinode-040952) Calling .DriverName
	I0914 19:05:21.066770   29302 main.go:141] libmachine: (multinode-040952) Calling .GetState
	I0914 19:05:21.068116   29302 fix.go:102] recreateIfNeeded on multinode-040952: state=Stopped err=<nil>
	I0914 19:05:21.068149   29302 main.go:141] libmachine: (multinode-040952) Calling .DriverName
	W0914 19:05:21.068327   29302 fix.go:128] unexpected machine state, will restart: <nil>
	I0914 19:05:21.070143   29302 out.go:177] * Restarting existing kvm2 VM for "multinode-040952" ...
	I0914 19:05:21.071437   29302 main.go:141] libmachine: (multinode-040952) Calling .Start
	I0914 19:05:21.071593   29302 main.go:141] libmachine: (multinode-040952) Ensuring networks are active...
	I0914 19:05:21.072249   29302 main.go:141] libmachine: (multinode-040952) Ensuring network default is active
	I0914 19:05:21.072599   29302 main.go:141] libmachine: (multinode-040952) Ensuring network mk-multinode-040952 is active
	I0914 19:05:21.072924   29302 main.go:141] libmachine: (multinode-040952) Getting domain xml...
	I0914 19:05:21.073627   29302 main.go:141] libmachine: (multinode-040952) Creating domain...
	I0914 19:05:22.290792   29302 main.go:141] libmachine: (multinode-040952) Waiting to get IP...
	I0914 19:05:22.291697   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:22.292055   29302 main.go:141] libmachine: (multinode-040952) DBG | unable to find current IP address of domain multinode-040952 in network mk-multinode-040952
	I0914 19:05:22.292102   29302 main.go:141] libmachine: (multinode-040952) DBG | I0914 19:05:22.292035   29331 retry.go:31] will retry after 308.296154ms: waiting for machine to come up
	I0914 19:05:22.601636   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:22.602066   29302 main.go:141] libmachine: (multinode-040952) DBG | unable to find current IP address of domain multinode-040952 in network mk-multinode-040952
	I0914 19:05:22.602099   29302 main.go:141] libmachine: (multinode-040952) DBG | I0914 19:05:22.602024   29331 retry.go:31] will retry after 317.837388ms: waiting for machine to come up
	I0914 19:05:22.921508   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:22.921867   29302 main.go:141] libmachine: (multinode-040952) DBG | unable to find current IP address of domain multinode-040952 in network mk-multinode-040952
	I0914 19:05:22.921901   29302 main.go:141] libmachine: (multinode-040952) DBG | I0914 19:05:22.921847   29331 retry.go:31] will retry after 471.086167ms: waiting for machine to come up
	I0914 19:05:23.394404   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:23.394838   29302 main.go:141] libmachine: (multinode-040952) DBG | unable to find current IP address of domain multinode-040952 in network mk-multinode-040952
	I0914 19:05:23.394871   29302 main.go:141] libmachine: (multinode-040952) DBG | I0914 19:05:23.394792   29331 retry.go:31] will retry after 484.306086ms: waiting for machine to come up
	I0914 19:05:23.880204   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:23.880564   29302 main.go:141] libmachine: (multinode-040952) DBG | unable to find current IP address of domain multinode-040952 in network mk-multinode-040952
	I0914 19:05:23.880583   29302 main.go:141] libmachine: (multinode-040952) DBG | I0914 19:05:23.880535   29331 retry.go:31] will retry after 618.601122ms: waiting for machine to come up
	I0914 19:05:24.500881   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:24.501312   29302 main.go:141] libmachine: (multinode-040952) DBG | unable to find current IP address of domain multinode-040952 in network mk-multinode-040952
	I0914 19:05:24.501338   29302 main.go:141] libmachine: (multinode-040952) DBG | I0914 19:05:24.501260   29331 retry.go:31] will retry after 909.340951ms: waiting for machine to come up
	I0914 19:05:25.412225   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:25.412602   29302 main.go:141] libmachine: (multinode-040952) DBG | unable to find current IP address of domain multinode-040952 in network mk-multinode-040952
	I0914 19:05:25.412643   29302 main.go:141] libmachine: (multinode-040952) DBG | I0914 19:05:25.412551   29331 retry.go:31] will retry after 1.126879825s: waiting for machine to come up
	I0914 19:05:26.540657   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:26.541060   29302 main.go:141] libmachine: (multinode-040952) DBG | unable to find current IP address of domain multinode-040952 in network mk-multinode-040952
	I0914 19:05:26.541092   29302 main.go:141] libmachine: (multinode-040952) DBG | I0914 19:05:26.541009   29331 retry.go:31] will retry after 1.102019824s: waiting for machine to come up
	I0914 19:05:27.644123   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:27.644509   29302 main.go:141] libmachine: (multinode-040952) DBG | unable to find current IP address of domain multinode-040952 in network mk-multinode-040952
	I0914 19:05:27.644533   29302 main.go:141] libmachine: (multinode-040952) DBG | I0914 19:05:27.644464   29331 retry.go:31] will retry after 1.486754446s: waiting for machine to come up
	I0914 19:05:29.133039   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:29.133510   29302 main.go:141] libmachine: (multinode-040952) DBG | unable to find current IP address of domain multinode-040952 in network mk-multinode-040952
	I0914 19:05:29.133535   29302 main.go:141] libmachine: (multinode-040952) DBG | I0914 19:05:29.133470   29331 retry.go:31] will retry after 2.117464983s: waiting for machine to come up
	I0914 19:05:31.252796   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:31.253157   29302 main.go:141] libmachine: (multinode-040952) DBG | unable to find current IP address of domain multinode-040952 in network mk-multinode-040952
	I0914 19:05:31.253189   29302 main.go:141] libmachine: (multinode-040952) DBG | I0914 19:05:31.253114   29331 retry.go:31] will retry after 2.386416431s: waiting for machine to come up
	I0914 19:05:33.642490   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:33.643052   29302 main.go:141] libmachine: (multinode-040952) DBG | unable to find current IP address of domain multinode-040952 in network mk-multinode-040952
	I0914 19:05:33.643079   29302 main.go:141] libmachine: (multinode-040952) DBG | I0914 19:05:33.643013   29331 retry.go:31] will retry after 2.611013914s: waiting for machine to come up
	I0914 19:05:36.255832   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:36.256237   29302 main.go:141] libmachine: (multinode-040952) DBG | unable to find current IP address of domain multinode-040952 in network mk-multinode-040952
	I0914 19:05:36.256259   29302 main.go:141] libmachine: (multinode-040952) DBG | I0914 19:05:36.256195   29331 retry.go:31] will retry after 4.317080822s: waiting for machine to come up
	I0914 19:05:40.578744   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:40.579178   29302 main.go:141] libmachine: (multinode-040952) Found IP for machine: 192.168.39.14
	I0914 19:05:40.579199   29302 main.go:141] libmachine: (multinode-040952) Reserving static IP address...
	I0914 19:05:40.579208   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has current primary IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:40.579755   29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "multinode-040952", mac: "52:54:00:0b:8d:f2", ip: "192.168.39.14"} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
	I0914 19:05:40.579790   29302 main.go:141] libmachine: (multinode-040952) DBG | skip adding static IP to network mk-multinode-040952 - found existing host DHCP lease matching {name: "multinode-040952", mac: "52:54:00:0b:8d:f2", ip: "192.168.39.14"}
	I0914 19:05:40.579808   29302 main.go:141] libmachine: (multinode-040952) Reserved static IP address: 192.168.39.14
	I0914 19:05:40.579828   29302 main.go:141] libmachine: (multinode-040952) Waiting for SSH to be available...
	I0914 19:05:40.579844   29302 main.go:141] libmachine: (multinode-040952) DBG | Getting to WaitForSSH function...
	I0914 19:05:40.581922   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:40.582219   29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
	I0914 19:05:40.582248   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:40.582419   29302 main.go:141] libmachine: (multinode-040952) DBG | Using SSH client type: external
	I0914 19:05:40.582441   29302 main.go:141] libmachine: (multinode-040952) DBG | Using SSH private key: /home/jenkins/minikube-integration/17217-7285/.minikube/machines/multinode-040952/id_rsa (-rw-------)
	I0914 19:05:40.582466   29302 main.go:141] libmachine: (multinode-040952) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.14 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17217-7285/.minikube/machines/multinode-040952/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 19:05:40.582480   29302 main.go:141] libmachine: (multinode-040952) DBG | About to run SSH command:
	I0914 19:05:40.582491   29302 main.go:141] libmachine: (multinode-040952) DBG | exit 0
	I0914 19:05:40.677125   29302 main.go:141] libmachine: (multinode-040952) DBG | SSH cmd err, output: <nil>: 
	I0914 19:05:40.677493   29302 main.go:141] libmachine: (multinode-040952) Calling .GetConfigRaw
	I0914 19:05:40.678081   29302 main.go:141] libmachine: (multinode-040952) Calling .GetIP
	I0914 19:05:40.680506   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:40.680910   29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
	I0914 19:05:40.680945   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:40.681103   29302 profile.go:148] Saving config to /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/config.json ...
	I0914 19:05:40.681284   29302 machine.go:88] provisioning docker machine ...
	I0914 19:05:40.681323   29302 main.go:141] libmachine: (multinode-040952) Calling .DriverName
	I0914 19:05:40.681566   29302 main.go:141] libmachine: (multinode-040952) Calling .GetMachineName
	I0914 19:05:40.681734   29302 buildroot.go:166] provisioning hostname "multinode-040952"
	I0914 19:05:40.681755   29302 main.go:141] libmachine: (multinode-040952) Calling .GetMachineName
	I0914 19:05:40.681906   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHHostname
	I0914 19:05:40.683964   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:40.684284   29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
	I0914 19:05:40.684307   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:40.684417   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHPort
	I0914 19:05:40.684595   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
	I0914 19:05:40.684736   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
	I0914 19:05:40.684890   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHUsername
	I0914 19:05:40.685062   29302 main.go:141] libmachine: Using SSH client type: native
	I0914 19:05:40.685397   29302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I0914 19:05:40.685412   29302 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-040952 && echo "multinode-040952" | sudo tee /etc/hostname
	I0914 19:05:40.823251   29302 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-040952
	
	I0914 19:05:40.823283   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHHostname
	I0914 19:05:40.825791   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:40.826169   29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
	I0914 19:05:40.826206   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:40.826321   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHPort
	I0914 19:05:40.826510   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
	I0914 19:05:40.826658   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
	I0914 19:05:40.826793   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHUsername
	I0914 19:05:40.826952   29302 main.go:141] libmachine: Using SSH client type: native
	I0914 19:05:40.827274   29302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I0914 19:05:40.827292   29302 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-040952' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-040952/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-040952' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 19:05:40.958211   29302 main.go:141] libmachine: SSH cmd err, output: <nil>: 
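Note: the two SSH commands above are the host-name provisioning step. The first sets the running hostname and persists it to /etc/hostname; the second makes sure /etc/hosts maps 127.0.1.1 to the new name, rewriting an existing 127.0.1.1 entry if present and appending one otherwise. A minimal, illustrative way to verify the result on the guest (not part of the log):

    # illustrative only: confirm what the provisioner just wrote
    cat /etc/hostname
    grep '^127.0.1.1' /etc/hosts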
	I0914 19:05:40.958234   29302 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17217-7285/.minikube CaCertPath:/home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17217-7285/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17217-7285/.minikube}
	I0914 19:05:40.958251   29302 buildroot.go:174] setting up certificates
	I0914 19:05:40.958258   29302 provision.go:83] configureAuth start
	I0914 19:05:40.958270   29302 main.go:141] libmachine: (multinode-040952) Calling .GetMachineName
	I0914 19:05:40.958579   29302 main.go:141] libmachine: (multinode-040952) Calling .GetIP
	I0914 19:05:40.960950   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:40.961279   29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
	I0914 19:05:40.961310   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:40.961443   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHHostname
	I0914 19:05:40.963552   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:40.964139   29302 provision.go:138] copyHostCerts
	I0914 19:05:40.966068   29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
	I0914 19:05:40.966080   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17217-7285/.minikube/ca.pem
	I0914 19:05:40.966098   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:40.966106   29302 exec_runner.go:144] found /home/jenkins/minikube-integration/17217-7285/.minikube/ca.pem, removing ...
	I0914 19:05:40.966111   29302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17217-7285/.minikube/ca.pem
	I0914 19:05:40.966169   29302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17217-7285/.minikube/ca.pem (1082 bytes)
	I0914 19:05:40.966263   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17217-7285/.minikube/cert.pem
	I0914 19:05:40.966284   29302 exec_runner.go:144] found /home/jenkins/minikube-integration/17217-7285/.minikube/cert.pem, removing ...
	I0914 19:05:40.966291   29302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17217-7285/.minikube/cert.pem
	I0914 19:05:40.966314   29302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17217-7285/.minikube/cert.pem (1123 bytes)
	I0914 19:05:40.966407   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17217-7285/.minikube/key.pem
	I0914 19:05:40.966426   29302 exec_runner.go:144] found /home/jenkins/minikube-integration/17217-7285/.minikube/key.pem, removing ...
	I0914 19:05:40.966429   29302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17217-7285/.minikube/key.pem
	I0914 19:05:40.966455   29302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17217-7285/.minikube/key.pem (1679 bytes)
	I0914 19:05:40.966496   29302 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17217-7285/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca-key.pem org=jenkins.multinode-040952 san=[192.168.39.14 192.168.39.14 localhost 127.0.0.1 minikube multinode-040952]
	I0914 19:05:41.093709   29302 provision.go:172] copyRemoteCerts
	I0914 19:05:41.093761   29302 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 19:05:41.093784   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHHostname
	I0914 19:05:41.096513   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:41.096889   29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
	I0914 19:05:41.096919   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:41.097089   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHPort
	I0914 19:05:41.097303   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
	I0914 19:05:41.097427   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHUsername
	I0914 19:05:41.097563   29302 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/multinode-040952/id_rsa Username:docker}
	I0914 19:05:41.185959   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0914 19:05:41.186035   29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 19:05:41.209076   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0914 19:05:41.209136   29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0914 19:05:41.231360   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0914 19:05:41.231432   29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 19:05:41.253346   29302 provision.go:86] duration metric: configureAuth took 295.075916ms
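Note: configureAuth ends by scp'ing three PEM files onto the guest: the CA (1082 bytes), the freshly generated server certificate (1224 bytes) and its key (1675 bytes), all under /etc/docker. These are the same paths the generated docker.service below passes to dockerd via --tlscacert/--tlscert/--tlskey. A hedged, illustrative check on the guest:

    # illustrative only: confirm the TLS material dockerd is configured to use is in place
    ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem
    openssl x509 -noout -subject -in /etc/docker/server.pem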
	I0914 19:05:41.253364   29302 buildroot.go:189] setting minikube options for container-runtime
	I0914 19:05:41.253583   29302 config.go:182] Loaded profile config "multinode-040952": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 19:05:41.253604   29302 main.go:141] libmachine: (multinode-040952) Calling .DriverName
	I0914 19:05:41.253889   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHHostname
	I0914 19:05:41.256397   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:41.256706   29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
	I0914 19:05:41.256746   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:41.256796   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHPort
	I0914 19:05:41.256990   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
	I0914 19:05:41.257147   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
	I0914 19:05:41.257300   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHUsername
	I0914 19:05:41.257433   29302 main.go:141] libmachine: Using SSH client type: native
	I0914 19:05:41.257764   29302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I0914 19:05:41.257781   29302 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0914 19:05:41.378606   29302 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0914 19:05:41.378636   29302 buildroot.go:70] root file system type: tmpfs
	I0914 19:05:41.378779   29302 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0914 19:05:41.378811   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHHostname
	I0914 19:05:41.381344   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:41.381631   29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
	I0914 19:05:41.381653   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:41.381854   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHPort
	I0914 19:05:41.382017   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
	I0914 19:05:41.382151   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
	I0914 19:05:41.382256   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHUsername
	I0914 19:05:41.382401   29302 main.go:141] libmachine: Using SSH client type: native
	I0914 19:05:41.382846   29302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I0914 19:05:41.382955   29302 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0914 19:05:41.524710   29302 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0914 19:05:41.524751   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHHostname
	I0914 19:05:41.527598   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:41.528021   29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
	I0914 19:05:41.528050   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:41.528233   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHPort
	I0914 19:05:41.528403   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
	I0914 19:05:41.528520   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
	I0914 19:05:41.528618   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHUsername
	I0914 19:05:41.528833   29302 main.go:141] libmachine: Using SSH client type: native
	I0914 19:05:41.529147   29302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I0914 19:05:41.529175   29302 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0914 19:05:42.395560   29302 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0914 19:05:42.395591   29302 machine.go:91] provisioned docker machine in 1.714293106s
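Note: the unit content is first written to docker.service.new, and the one-liner above only installs it (mv, daemon-reload, enable, restart) when `diff -u` reports a difference or fails. Here diff fails because the target does not exist yet, so the install branch runs, and the "Created symlink" line is the output of `systemctl enable docker`. A stripped-down sketch of the same update-only-if-changed pattern, with FILE as a placeholder:

    # generic form of the idiom shown above (FILE is a placeholder)
    sudo diff -u FILE FILE.new || {
      sudo mv FILE.new FILE
      sudo systemctl daemon-reload && sudo systemctl enable docker && sudo systemctl restart docker
    }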
	I0914 19:05:42.395605   29302 start.go:300] post-start starting for "multinode-040952" (driver="kvm2")
	I0914 19:05:42.395617   29302 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 19:05:42.395637   29302 main.go:141] libmachine: (multinode-040952) Calling .DriverName
	I0914 19:05:42.395990   29302 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 19:05:42.396021   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHHostname
	I0914 19:05:42.398544   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:42.398997   29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
	I0914 19:05:42.399029   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:42.399146   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHPort
	I0914 19:05:42.399327   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
	I0914 19:05:42.399452   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHUsername
	I0914 19:05:42.399604   29302 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/multinode-040952/id_rsa Username:docker}
	I0914 19:05:42.490598   29302 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 19:05:42.494659   29302 command_runner.go:130] > NAME=Buildroot
	I0914 19:05:42.494675   29302 command_runner.go:130] > VERSION=2021.02.12-1-gaa3debf-dirty
	I0914 19:05:42.494679   29302 command_runner.go:130] > ID=buildroot
	I0914 19:05:42.494684   29302 command_runner.go:130] > VERSION_ID=2021.02.12
	I0914 19:05:42.494689   29302 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0914 19:05:42.494714   29302 info.go:137] Remote host: Buildroot 2021.02.12
	I0914 19:05:42.494726   29302 filesync.go:126] Scanning /home/jenkins/minikube-integration/17217-7285/.minikube/addons for local assets ...
	I0914 19:05:42.494786   29302 filesync.go:126] Scanning /home/jenkins/minikube-integration/17217-7285/.minikube/files for local assets ...
	I0914 19:05:42.494859   29302 filesync.go:149] local asset: /home/jenkins/minikube-integration/17217-7285/.minikube/files/etc/ssl/certs/145062.pem -> 145062.pem in /etc/ssl/certs
	I0914 19:05:42.494867   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/files/etc/ssl/certs/145062.pem -> /etc/ssl/certs/145062.pem
	I0914 19:05:42.494949   29302 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 19:05:42.504158   29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/files/etc/ssl/certs/145062.pem --> /etc/ssl/certs/145062.pem (1708 bytes)
	I0914 19:05:42.526832   29302 start.go:303] post-start completed in 131.213234ms
	I0914 19:05:42.526851   29302 fix.go:56] fixHost completed within 21.475246623s
	I0914 19:05:42.526869   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHHostname
	I0914 19:05:42.529527   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:42.529937   29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
	I0914 19:05:42.529986   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:42.530137   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHPort
	I0914 19:05:42.530338   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
	I0914 19:05:42.530471   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
	I0914 19:05:42.530592   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHUsername
	I0914 19:05:42.530728   29302 main.go:141] libmachine: Using SSH client type: native
	I0914 19:05:42.531030   29302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I0914 19:05:42.531041   29302 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0914 19:05:42.654398   29302 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694718342.602499385
	
	I0914 19:05:42.654428   29302 fix.go:206] guest clock: 1694718342.602499385
	I0914 19:05:42.654435   29302 fix.go:219] Guest: 2023-09-14 19:05:42.602499385 +0000 UTC Remote: 2023-09-14 19:05:42.526854621 +0000 UTC m=+21.595630701 (delta=75.644764ms)
	I0914 19:05:42.654452   29302 fix.go:190] guest clock delta is within tolerance: 75.644764ms
	I0914 19:05:42.654457   29302 start.go:83] releasing machines lock for "multinode-040952", held for 21.60286411s
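Note: the guest-clock check compares a high-resolution timestamp read on the VM against the host-side timestamp taken just before it: 1694718342.602499385 - 1694718342.526854621 = 0.075644764 s, i.e. the 75.644764 ms delta reported above, which the log states is within tolerance, so no time resync is forced. The garbled command is Go's fmt error output; the intended command is presumably:

    date +%s.%N    # presumed intent of the "%!s(MISSING).%!N(MISSING)" line above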
	I0914 19:05:42.654478   29302 main.go:141] libmachine: (multinode-040952) Calling .DriverName
	I0914 19:05:42.654724   29302 main.go:141] libmachine: (multinode-040952) Calling .GetIP
	I0914 19:05:42.657287   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:42.657640   29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
	I0914 19:05:42.657674   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:42.657831   29302 main.go:141] libmachine: (multinode-040952) Calling .DriverName
	I0914 19:05:42.658283   29302 main.go:141] libmachine: (multinode-040952) Calling .DriverName
	I0914 19:05:42.658453   29302 main.go:141] libmachine: (multinode-040952) Calling .DriverName
	I0914 19:05:42.658514   29302 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 19:05:42.658551   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHHostname
	I0914 19:05:42.658645   29302 ssh_runner.go:195] Run: cat /version.json
	I0914 19:05:42.658666   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHHostname
	I0914 19:05:42.660832   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:42.661105   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:42.661257   29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
	I0914 19:05:42.661287   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:42.661432   29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
	I0914 19:05:42.661445   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHPort
	I0914 19:05:42.661474   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:42.661579   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHPort
	I0914 19:05:42.661683   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
	I0914 19:05:42.661749   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
	I0914 19:05:42.661825   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHUsername
	I0914 19:05:42.661884   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHUsername
	I0914 19:05:42.661944   29302 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/multinode-040952/id_rsa Username:docker}
	I0914 19:05:42.661988   29302 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/multinode-040952/id_rsa Username:docker}
	I0914 19:05:42.746664   29302 command_runner.go:130] > {"iso_version": "v1.31.0-1694468241-17194", "kicbase_version": "v0.0.40-1694457807-17194", "minikube_version": "v1.31.2", "commit": "08513a9f809e39764bdb93fc427d760a652ba5ea"}
	I0914 19:05:42.747194   29302 ssh_runner.go:195] Run: systemctl --version
	I0914 19:05:42.773722   29302 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0914 19:05:42.773771   29302 command_runner.go:130] > systemd 247 (247)
	I0914 19:05:42.773794   29302 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0914 19:05:42.773870   29302 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0914 19:05:42.779663   29302 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0914 19:05:42.779691   29302 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 19:05:42.779753   29302 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 19:05:42.796458   29302 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0914 19:05:42.796494   29302 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 19:05:42.796506   29302 start.go:469] detecting cgroup driver to use...
	I0914 19:05:42.796618   29302 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 19:05:42.814727   29302 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0914 19:05:42.815085   29302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0914 19:05:42.825286   29302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0914 19:05:42.835590   29302 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0914 19:05:42.835639   29302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0914 19:05:42.845397   29302 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 19:05:42.855075   29302 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0914 19:05:42.864775   29302 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 19:05:42.874625   29302 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 19:05:42.885032   29302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0914 19:05:42.895300   29302 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 19:05:42.904333   29302 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0914 19:05:42.904406   29302 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 19:05:42.913443   29302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 19:05:43.014402   29302 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0914 19:05:43.034266   29302 start.go:469] detecting cgroup driver to use...
	I0914 19:05:43.034341   29302 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0914 19:05:43.046339   29302 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0914 19:05:43.047277   29302 command_runner.go:130] > [Unit]
	I0914 19:05:43.047292   29302 command_runner.go:130] > Description=Docker Application Container Engine
	I0914 19:05:43.047300   29302 command_runner.go:130] > Documentation=https://docs.docker.com
	I0914 19:05:43.047311   29302 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0914 19:05:43.047321   29302 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0914 19:05:43.047330   29302 command_runner.go:130] > StartLimitBurst=3
	I0914 19:05:43.047340   29302 command_runner.go:130] > StartLimitIntervalSec=60
	I0914 19:05:43.047347   29302 command_runner.go:130] > [Service]
	I0914 19:05:43.047354   29302 command_runner.go:130] > Type=notify
	I0914 19:05:43.047374   29302 command_runner.go:130] > Restart=on-failure
	I0914 19:05:43.047387   29302 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0914 19:05:43.047408   29302 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0914 19:05:43.047423   29302 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0914 19:05:43.047437   29302 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0914 19:05:43.047453   29302 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0914 19:05:43.047465   29302 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0914 19:05:43.047478   29302 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0914 19:05:43.047499   29302 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0914 19:05:43.047514   29302 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0914 19:05:43.047523   29302 command_runner.go:130] > ExecStart=
	I0914 19:05:43.047549   29302 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I0914 19:05:43.047562   29302 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0914 19:05:43.047574   29302 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0914 19:05:43.047589   29302 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0914 19:05:43.047600   29302 command_runner.go:130] > LimitNOFILE=infinity
	I0914 19:05:43.047609   29302 command_runner.go:130] > LimitNPROC=infinity
	I0914 19:05:43.047619   29302 command_runner.go:130] > LimitCORE=infinity
	I0914 19:05:43.047632   29302 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0914 19:05:43.047647   29302 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0914 19:05:43.047657   29302 command_runner.go:130] > TasksMax=infinity
	I0914 19:05:43.047668   29302 command_runner.go:130] > TimeoutStartSec=0
	I0914 19:05:43.047682   29302 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0914 19:05:43.047692   29302 command_runner.go:130] > Delegate=yes
	I0914 19:05:43.047706   29302 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0914 19:05:43.047716   29302 command_runner.go:130] > KillMode=process
	I0914 19:05:43.047721   29302 command_runner.go:130] > [Install]
	I0914 19:05:43.047732   29302 command_runner.go:130] > WantedBy=multi-user.target
	I0914 19:05:43.047831   29302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 19:05:43.059348   29302 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 19:05:43.076586   29302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 19:05:43.091070   29302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0914 19:05:43.103630   29302 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0914 19:05:43.127566   29302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0914 19:05:43.140558   29302 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 19:05:43.157218   29302 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
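Note: after stopping containerd and confirming that neither containerd nor crio is active, /etc/crictl.yaml (which was pointed at the containerd socket earlier in this phase) is rewritten so crictl talks to cri-dockerd instead; the `crictl version` call further down confirms RuntimeName: docker over that endpoint. A simplified, illustrative form of the command above:

    # illustrative only: point crictl at the cri-dockerd socket
    printf 'runtime-endpoint: unix:///var/run/cri-dockerd.sock\n' | sudo tee /etc/crictl.yaml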
	I0914 19:05:43.157773   29302 ssh_runner.go:195] Run: which cri-dockerd
	I0914 19:05:43.161227   29302 command_runner.go:130] > /usr/bin/cri-dockerd
	I0914 19:05:43.161332   29302 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0914 19:05:43.168999   29302 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0914 19:05:43.184057   29302 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0914 19:05:43.293264   29302 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0914 19:05:43.399283   29302 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0914 19:05:43.399314   29302 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0914 19:05:43.416580   29302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 19:05:43.527824   29302 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0914 19:05:43.992016   29302 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0914 19:05:44.097079   29302 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0914 19:05:44.209025   29302 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0914 19:05:44.320513   29302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 19:05:44.428053   29302 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0914 19:05:44.444720   29302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 19:05:44.552820   29302 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0914 19:05:44.632416   29302 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0914 19:05:44.632491   29302 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0914 19:05:44.638252   29302 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0914 19:05:44.638276   29302 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0914 19:05:44.638286   29302 command_runner.go:130] > Device: 16h/22d	Inode: 831         Links: 1
	I0914 19:05:44.638296   29302 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0914 19:05:44.638305   29302 command_runner.go:130] > Access: 2023-09-14 19:05:44.514543091 +0000
	I0914 19:05:44.638313   29302 command_runner.go:130] > Modify: 2023-09-14 19:05:44.514543091 +0000
	I0914 19:05:44.638326   29302 command_runner.go:130] > Change: 2023-09-14 19:05:44.517543091 +0000
	I0914 19:05:44.638332   29302 command_runner.go:130] >  Birth: -
	I0914 19:05:44.638715   29302 start.go:537] Will wait 60s for crictl version
	I0914 19:05:44.638765   29302 ssh_runner.go:195] Run: which crictl
	I0914 19:05:44.642939   29302 command_runner.go:130] > /usr/bin/crictl
	I0914 19:05:44.643309   29302 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 19:05:44.681642   29302 command_runner.go:130] > Version:  0.1.0
	I0914 19:05:44.681667   29302 command_runner.go:130] > RuntimeName:  docker
	I0914 19:05:44.681672   29302 command_runner.go:130] > RuntimeVersion:  24.0.6
	I0914 19:05:44.681678   29302 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0914 19:05:44.683160   29302 start.go:553] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1alpha2
	I0914 19:05:44.683219   29302 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0914 19:05:44.707204   29302 command_runner.go:130] > 24.0.6
	I0914 19:05:44.708405   29302 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0914 19:05:44.736598   29302 command_runner.go:130] > 24.0.6
	I0914 19:05:44.738686   29302 out.go:204] * Preparing Kubernetes v1.28.1 on Docker 24.0.6 ...
	I0914 19:05:44.738719   29302 main.go:141] libmachine: (multinode-040952) Calling .GetIP
	I0914 19:05:44.741297   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:44.741690   29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
	I0914 19:05:44.741717   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:44.741894   29302 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0914 19:05:44.745777   29302 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
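Note: the one-liner above filters any existing host.minikube.internal entry out of /etc/hosts, appends the fresh mapping, writes the result to a temp file, and only then uses sudo cp to install it; the rewrite is staged in an unprivileged temp file because the shell redirection itself would not run under sudo. The same pattern is repeated below for control-plane.minikube.internal. A generic sketch, with NAME and IP as placeholders:

    # NAME and IP are placeholders for the hostname/address being (re)pinned
    { grep -v $'\tNAME$' /etc/hosts; printf 'IP\tNAME\n'; } > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts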
	I0914 19:05:44.758482   29302 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0914 19:05:44.758533   29302 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0914 19:05:44.777353   29302 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.1
	I0914 19:05:44.777369   29302 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.1
	I0914 19:05:44.777375   29302 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.1
	I0914 19:05:44.777380   29302 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.1
	I0914 19:05:44.777385   29302 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
	I0914 19:05:44.777389   29302 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0914 19:05:44.777395   29302 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0914 19:05:44.777399   29302 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0914 19:05:44.777404   29302 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 19:05:44.777409   29302 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0914 19:05:44.777499   29302 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.1
	registry.k8s.io/kube-proxy:v1.28.1
	registry.k8s.io/kube-controller-manager:v1.28.1
	registry.k8s.io/kube-scheduler:v1.28.1
	kindest/kindnetd:v20230809-80a64d96
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0914 19:05:44.777521   29302 docker.go:566] Images already preloaded, skipping extraction
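Note: whether the preload tarball needs extracting is decided by listing what the Docker daemon already holds and comparing it against the expected image set for Kubernetes v1.28.1; since all ten images (apiserver, proxy, controller-manager, scheduler, kindnetd, etcd, coredns, pause, storage-provisioner, busybox) are present, extraction is skipped. Illustrative equivalent of the listing command:

    # illustrative only: list the daemon's images the same way the provisioner does
    docker images --format '{{.Repository}}:{{.Tag}}' | sort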
	I0914 19:05:44.777580   29302 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0914 19:05:44.796442   29302 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.1
	I0914 19:05:44.796466   29302 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.1
	I0914 19:05:44.796474   29302 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.1
	I0914 19:05:44.796487   29302 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.1
	I0914 19:05:44.796495   29302 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
	I0914 19:05:44.796502   29302 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0914 19:05:44.796510   29302 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0914 19:05:44.796517   29302 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0914 19:05:44.796526   29302 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 19:05:44.796533   29302 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0914 19:05:44.796582   29302 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.1
	registry.k8s.io/kube-proxy:v1.28.1
	registry.k8s.io/kube-controller-manager:v1.28.1
	registry.k8s.io/kube-scheduler:v1.28.1
	kindest/kindnetd:v20230809-80a64d96
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0914 19:05:44.796603   29302 cache_images.go:84] Images are preloaded, skipping loading
	I0914 19:05:44.796662   29302 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0914 19:05:44.826844   29302 command_runner.go:130] > cgroupfs
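Note: the "cgroupfs" answer to `docker info --format {{.CgroupDriver}}` is what the rest of this phase keys off: docker itself was configured for cgroupfs via /etc/docker/daemon.json a moment earlier, and the generated KubeletConfiguration below sets cgroupDriver: cgroupfs to match, so runtime and kubelet agree on the non-systemd cgroup driver. Illustrative form of the probe:

    # illustrative only: the value the provisioner keys off
    docker info --format '{{.CgroupDriver}}'    # -> cgroupfs on this VM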
	I0914 19:05:44.827994   29302 cni.go:84] Creating CNI manager for ""
	I0914 19:05:44.828012   29302 cni.go:136] 3 nodes found, recommending kindnet
	I0914 19:05:44.828028   29302 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0914 19:05:44.828050   29302 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.14 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-040952 NodeName:multinode-040952 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.14"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.14 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 19:05:44.828163   29302 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.14
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-040952"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.14
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.14"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 19:05:44.828241   29302 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-040952 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.14
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:multinode-040952 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
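Note: the unit text above is the kubelet drop-in content: ExecStart= is first emptied to clear any inherited command, then redefined so the v1.28.1 kubelet is started with the cri-dockerd endpoint, this node's IP and the generated /var/lib/kubelet/config.yaml. The three scp destinations just below correspond to this drop-in (10-kubeadm.conf, 378 bytes), the base kubelet.service unit (352 bytes) and the kubeadm.yaml shown above (2102 bytes). Illustrative way to inspect the rendered drop-in on the guest:

    # illustrative only: show the effective ExecStart lines of the kubelet unit
    sudo systemctl cat kubelet | grep -A1 '^ExecStart='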
	I0914 19:05:44.828290   29302 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0914 19:05:44.837426   29302 command_runner.go:130] > kubeadm
	I0914 19:05:44.837444   29302 command_runner.go:130] > kubectl
	I0914 19:05:44.837448   29302 command_runner.go:130] > kubelet
	I0914 19:05:44.837478   29302 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 19:05:44.837538   29302 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 19:05:44.845710   29302 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0914 19:05:44.861289   29302 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 19:05:44.876364   29302 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I0914 19:05:44.892748   29302 ssh_runner.go:195] Run: grep 192.168.39.14	control-plane.minikube.internal$ /etc/hosts
	I0914 19:05:44.896225   29302 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.14	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 19:05:44.908521   29302 certs.go:56] Setting up /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952 for IP: 192.168.39.14
	I0914 19:05:44.908554   29302 certs.go:190] acquiring lock for shared ca certs: {Name:mk8231a646ae91c44c394a9ea29f867fd3f74220 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 19:05:44.908702   29302 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17217-7285/.minikube/ca.key
	I0914 19:05:44.908750   29302 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17217-7285/.minikube/proxy-client-ca.key
	I0914 19:05:44.908825   29302 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/client.key
	I0914 19:05:44.908896   29302 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/apiserver.key.ba52ec04
	I0914 19:05:44.908936   29302 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/proxy-client.key
	I0914 19:05:44.908959   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0914 19:05:44.908984   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0914 19:05:44.909003   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0914 19:05:44.909021   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0914 19:05:44.909038   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0914 19:05:44.909057   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0914 19:05:44.909069   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0914 19:05:44.909083   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0914 19:05:44.909133   29302 certs.go:437] found cert: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/home/jenkins/minikube-integration/17217-7285/.minikube/certs/14506.pem (1338 bytes)
	W0914 19:05:44.909164   29302 certs.go:433] ignoring /home/jenkins/minikube-integration/17217-7285/.minikube/certs/home/jenkins/minikube-integration/17217-7285/.minikube/certs/14506_empty.pem, impossibly tiny 0 bytes
	I0914 19:05:44.909175   29302 certs.go:437] found cert: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 19:05:44.909194   29302 certs.go:437] found cert: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca.pem (1082 bytes)
	I0914 19:05:44.909221   29302 certs.go:437] found cert: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/home/jenkins/minikube-integration/17217-7285/.minikube/certs/cert.pem (1123 bytes)
	I0914 19:05:44.909246   29302 certs.go:437] found cert: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/home/jenkins/minikube-integration/17217-7285/.minikube/certs/key.pem (1679 bytes)
	I0914 19:05:44.909284   29302 certs.go:437] found cert: /home/jenkins/minikube-integration/17217-7285/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17217-7285/.minikube/files/etc/ssl/certs/145062.pem (1708 bytes)
	I0914 19:05:44.909309   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/14506.pem -> /usr/share/ca-certificates/14506.pem
	I0914 19:05:44.909322   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/files/etc/ssl/certs/145062.pem -> /usr/share/ca-certificates/145062.pem
	I0914 19:05:44.909336   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0914 19:05:44.909846   29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0914 19:05:44.934419   29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0914 19:05:44.957511   29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 19:05:44.980559   29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 19:05:45.004923   29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 19:05:45.028375   29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 19:05:45.051817   29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 19:05:45.074510   29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0914 19:05:45.098260   29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/certs/14506.pem --> /usr/share/ca-certificates/14506.pem (1338 bytes)
	I0914 19:05:45.121292   29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/files/etc/ssl/certs/145062.pem --> /usr/share/ca-certificates/145062.pem (1708 bytes)
	I0914 19:05:45.144038   29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 19:05:45.166026   29302 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
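
The lines above push each regenerated certificate (and an in-memory kubeconfig) into the VM one file at a time. A minimal sketch of that transfer step in Go, assuming a plain scp binary on the host; the SSH key path and the "docker" user are placeholders, and only the guest IP and destination path are taken from this log:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    // copyToVM pushes a local file to a path inside the VM over scp.
    // keyPath and user are placeholders, not values from this run.
    func copyToVM(keyPath, user, host, src, dst string) error {
        cmd := exec.Command("scp",
            "-i", keyPath,
            "-o", "StrictHostKeyChecking=no",
            src, fmt.Sprintf("%s@%s:%s", user, host, dst))
        if out, err := cmd.CombinedOutput(); err != nil {
            return fmt.Errorf("scp %s -> %s: %v: %s", src, dst, err, out)
        }
        return nil
    }

    func main() {
        err := copyToVM("/path/to/machine/id_rsa", "docker", "192.168.39.14",
            "apiserver.crt", "/var/lib/minikube/certs/apiserver.crt")
        if err != nil {
            log.Fatal(err)
        }
    }
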
	I0914 19:05:45.181807   29302 ssh_runner.go:195] Run: openssl version
	I0914 19:05:45.187376   29302 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0914 19:05:45.187428   29302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14506.pem && ln -fs /usr/share/ca-certificates/14506.pem /etc/ssl/certs/14506.pem"
	I0914 19:05:45.196849   29302 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14506.pem
	I0914 19:05:45.201160   29302 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 14 18:48 /usr/share/ca-certificates/14506.pem
	I0914 19:05:45.201218   29302 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 14 18:48 /usr/share/ca-certificates/14506.pem
	I0914 19:05:45.201259   29302 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14506.pem
	I0914 19:05:45.206455   29302 command_runner.go:130] > 51391683
	I0914 19:05:45.206657   29302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14506.pem /etc/ssl/certs/51391683.0"
	I0914 19:05:45.216148   29302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145062.pem && ln -fs /usr/share/ca-certificates/145062.pem /etc/ssl/certs/145062.pem"
	I0914 19:05:45.225498   29302 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145062.pem
	I0914 19:05:45.229584   29302 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 14 18:48 /usr/share/ca-certificates/145062.pem
	I0914 19:05:45.229749   29302 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 14 18:48 /usr/share/ca-certificates/145062.pem
	I0914 19:05:45.229794   29302 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145062.pem
	I0914 19:05:45.235209   29302 command_runner.go:130] > 3ec20f2e
	I0914 19:05:45.235283   29302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145062.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 19:05:45.244557   29302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 19:05:45.253825   29302 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 19:05:45.258352   29302 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 14 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0914 19:05:45.258379   29302 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 14 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0914 19:05:45.258421   29302 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 19:05:45.263679   29302 command_runner.go:130] > b5213941
	I0914 19:05:45.263724   29302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
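
The openssl/ln sequence above rebuilds the guest's trusted-CA links: each PEM under /usr/share/ca-certificates gets its OpenSSL subject hash computed, and a <hash>.0 symlink is created under /etc/ssl/certs so the TLS library can find it. A small sketch of that rehash step, shelling out to openssl the same way; the certificate path below is just one of the files handled above:

    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "strings"
    )

    // linkCert runs `openssl x509 -hash -noout -in cert` and symlinks
    // /etc/ssl/certs/<hash>.0 back to the certificate, mirroring the
    // commands executed over SSH above.
    func linkCert(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", certPath, err)
        }
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
        _ = os.Remove(link) // replace a stale link if one exists
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            log.Fatal(err)
        }
    }
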
	I0914 19:05:45.273201   29302 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0914 19:05:45.277387   29302 command_runner.go:130] > ca.crt
	I0914 19:05:45.277404   29302 command_runner.go:130] > ca.key
	I0914 19:05:45.277412   29302 command_runner.go:130] > healthcheck-client.crt
	I0914 19:05:45.277419   29302 command_runner.go:130] > healthcheck-client.key
	I0914 19:05:45.277426   29302 command_runner.go:130] > peer.crt
	I0914 19:05:45.277433   29302 command_runner.go:130] > peer.key
	I0914 19:05:45.277439   29302 command_runner.go:130] > server.crt
	I0914 19:05:45.277446   29302 command_runner.go:130] > server.key
	I0914 19:05:45.277502   29302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 19:05:45.283251   29302 command_runner.go:130] > Certificate will not expire
	I0914 19:05:45.283310   29302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 19:05:45.289331   29302 command_runner.go:130] > Certificate will not expire
	I0914 19:05:45.289405   29302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 19:05:45.295261   29302 command_runner.go:130] > Certificate will not expire
	I0914 19:05:45.295329   29302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 19:05:45.300680   29302 command_runner.go:130] > Certificate will not expire
	I0914 19:05:45.300910   29302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 19:05:45.306424   29302 command_runner.go:130] > Certificate will not expire
	I0914 19:05:45.306599   29302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0914 19:05:45.311906   29302 command_runner.go:130] > Certificate will not expire
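
The -checkend 86400 calls above ask whether each control-plane certificate expires within the next 24 hours. The same check can be expressed without shelling out, using Go's crypto/x509; the certificate path below is one of the files checked above:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    // willExpireWithin answers the same question as `openssl x509 -checkend`:
    // does the certificate at path expire within duration d?
    func willExpireWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        expiring, err := willExpireWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("expires within 24h:", expiring)
    }
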
	I0914 19:05:45.312249   29302 kubeadm.go:404] StartCluster: {Name:multinode-040952 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.28.1 ClusterName:multinode-040952 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.14 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.16 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.107 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingre
ss:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 19:05:45.312423   29302 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0914 19:05:45.331162   29302 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 19:05:45.340190   29302 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0914 19:05:45.340212   29302 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0914 19:05:45.340221   29302 command_runner.go:130] > /var/lib/minikube/etcd:
	I0914 19:05:45.340226   29302 command_runner.go:130] > member
	I0914 19:05:45.340246   29302 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0914 19:05:45.340267   29302 kubeadm.go:636] restartCluster start
	I0914 19:05:45.340309   29302 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 19:05:45.348452   29302 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:45.348894   29302 kubeconfig.go:135] verify returned: extract IP: "multinode-040952" does not appear in /home/jenkins/minikube-integration/17217-7285/kubeconfig
	I0914 19:05:45.348998   29302 kubeconfig.go:146] "multinode-040952" context is missing from /home/jenkins/minikube-integration/17217-7285/kubeconfig - will repair!
	I0914 19:05:45.349266   29302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17217-7285/kubeconfig: {Name:mkd810f3a7b7ee0c3e3eff94a19f3da881e8200c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 19:05:45.349662   29302 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17217-7285/kubeconfig
	I0914 19:05:45.349849   29302 kapi.go:59] client config for multinode-040952: &rest.Config{Host:"https://192.168.39.14:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/client.crt", KeyFile:"/home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/client.key", CAFile:"/home/jenkins/minikube-integration/17217-7285/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c15e60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0914 19:05:45.350444   29302 cert_rotation.go:137] Starting client certificate rotation controller
	I0914 19:05:45.350587   29302 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 19:05:45.358418   29302 api_server.go:166] Checking apiserver status ...
	I0914 19:05:45.358456   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 19:05:45.368403   29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:45.368429   29302 api_server.go:166] Checking apiserver status ...
	I0914 19:05:45.368512   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 19:05:45.378454   29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:45.879114   29302 api_server.go:166] Checking apiserver status ...
	I0914 19:05:45.879187   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 19:05:45.890404   29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:46.379073   29302 api_server.go:166] Checking apiserver status ...
	I0914 19:05:46.379137   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 19:05:46.390460   29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:46.878635   29302 api_server.go:166] Checking apiserver status ...
	I0914 19:05:46.878712   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 19:05:46.890234   29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:47.378771   29302 api_server.go:166] Checking apiserver status ...
	I0914 19:05:47.378861   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 19:05:47.390972   29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:47.879569   29302 api_server.go:166] Checking apiserver status ...
	I0914 19:05:47.879636   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 19:05:47.891015   29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:48.378618   29302 api_server.go:166] Checking apiserver status ...
	I0914 19:05:48.378691   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 19:05:48.390037   29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:48.878591   29302 api_server.go:166] Checking apiserver status ...
	I0914 19:05:48.878656   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 19:05:48.889682   29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:49.379283   29302 api_server.go:166] Checking apiserver status ...
	I0914 19:05:49.379348   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 19:05:49.390298   29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:49.878830   29302 api_server.go:166] Checking apiserver status ...
	I0914 19:05:49.878929   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 19:05:49.890070   29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:50.378594   29302 api_server.go:166] Checking apiserver status ...
	I0914 19:05:50.378669   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 19:05:50.389750   29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:50.879406   29302 api_server.go:166] Checking apiserver status ...
	I0914 19:05:50.879474   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 19:05:50.890792   29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:51.378749   29302 api_server.go:166] Checking apiserver status ...
	I0914 19:05:51.378818   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 19:05:51.390362   29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:51.878913   29302 api_server.go:166] Checking apiserver status ...
	I0914 19:05:51.878983   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 19:05:51.890684   29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:52.379313   29302 api_server.go:166] Checking apiserver status ...
	I0914 19:05:52.379396   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 19:05:52.390412   29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:52.878965   29302 api_server.go:166] Checking apiserver status ...
	I0914 19:05:52.879054   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 19:05:52.890079   29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:53.378659   29302 api_server.go:166] Checking apiserver status ...
	I0914 19:05:53.378734   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 19:05:53.389835   29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:53.879480   29302 api_server.go:166] Checking apiserver status ...
	I0914 19:05:53.879549   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 19:05:53.890643   29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:54.379316   29302 api_server.go:166] Checking apiserver status ...
	I0914 19:05:54.379396   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 19:05:54.390543   29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:54.879126   29302 api_server.go:166] Checking apiserver status ...
	I0914 19:05:54.879190   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 19:05:54.890939   29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:55.358694   29302 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
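
The repeated "Checking apiserver status" lines above are a poll loop: roughly every 500ms the restarter looks for a kube-apiserver process, and when the surrounding context deadline elapses with no hit it concludes the cluster needs reconfiguring. A sketch of that pattern, assuming a local shell rather than minikube's SSH runner; the pgrep expression is the one from the log, the timeout value is an assumption:

    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServer polls for a kube-apiserver process about every 500ms
    // until the context deadline, as the loop above does.
    func waitForAPIServer(ctx context.Context) (string, error) {
        ticker := time.NewTicker(500 * time.Millisecond)
        defer ticker.Stop()
        for {
            out, err := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
            if err == nil {
                return string(out), nil
            }
            select {
            case <-ctx.Done():
                return "", fmt.Errorf("apiserver never appeared: %w", ctx.Err())
            case <-ticker.C:
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()
        if pid, err := waitForAPIServer(ctx); err != nil {
            fmt.Println(err) // the elapsed deadline is what triggers the reconfigure path here
        } else {
            fmt.Println("apiserver pid:", pid)
        }
    }
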
	I0914 19:05:55.358719   29302 kubeadm.go:1128] stopping kube-system containers ...
	I0914 19:05:55.358774   29302 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0914 19:05:55.380728   29302 command_runner.go:130] > 5ca168b256ec
	I0914 19:05:55.380744   29302 command_runner.go:130] > bda018c9a602
	I0914 19:05:55.380748   29302 command_runner.go:130] > fb2dbcea99e9
	I0914 19:05:55.380752   29302 command_runner.go:130] > 2de9c2baa72f
	I0914 19:05:55.380756   29302 command_runner.go:130] > 1dac2d18ee96
	I0914 19:05:55.380760   29302 command_runner.go:130] > bd14e8416f22
	I0914 19:05:55.380764   29302 command_runner.go:130] > 2c6b193d8f06
	I0914 19:05:55.380768   29302 command_runner.go:130] > ac89590af9af
	I0914 19:05:55.380771   29302 command_runner.go:130] > e7dd2a8d2bf2
	I0914 19:05:55.380776   29302 command_runner.go:130] > 79de1cbad023
	I0914 19:05:55.380780   29302 command_runner.go:130] > bdae306df774
	I0914 19:05:55.380783   29302 command_runner.go:130] > 7ae1932584ff
	I0914 19:05:55.380787   29302 command_runner.go:130] > 3204588282f3
	I0914 19:05:55.380790   29302 command_runner.go:130] > c60a4b7edf2a
	I0914 19:05:55.380794   29302 command_runner.go:130] > bf69af78fefd
	I0914 19:05:55.380798   29302 command_runner.go:130] > 992d221cf3de
	I0914 19:05:55.381007   29302 docker.go:462] Stopping containers: [5ca168b256ec bda018c9a602 fb2dbcea99e9 2de9c2baa72f 1dac2d18ee96 bd14e8416f22 2c6b193d8f06 ac89590af9af e7dd2a8d2bf2 79de1cbad023 bdae306df774 7ae1932584ff 3204588282f3 c60a4b7edf2a bf69af78fefd 992d221cf3de]
	I0914 19:05:55.381063   29302 ssh_runner.go:195] Run: docker stop 5ca168b256ec bda018c9a602 fb2dbcea99e9 2de9c2baa72f 1dac2d18ee96 bd14e8416f22 2c6b193d8f06 ac89590af9af e7dd2a8d2bf2 79de1cbad023 bdae306df774 7ae1932584ff 3204588282f3 c60a4b7edf2a bf69af78fefd 992d221cf3de
	I0914 19:05:55.400500   29302 command_runner.go:130] > 5ca168b256ec
	I0914 19:05:55.400523   29302 command_runner.go:130] > bda018c9a602
	I0914 19:05:55.400528   29302 command_runner.go:130] > fb2dbcea99e9
	I0914 19:05:55.400532   29302 command_runner.go:130] > 2de9c2baa72f
	I0914 19:05:55.400537   29302 command_runner.go:130] > 1dac2d18ee96
	I0914 19:05:55.400545   29302 command_runner.go:130] > bd14e8416f22
	I0914 19:05:55.400549   29302 command_runner.go:130] > 2c6b193d8f06
	I0914 19:05:55.400915   29302 command_runner.go:130] > ac89590af9af
	I0914 19:05:55.400933   29302 command_runner.go:130] > e7dd2a8d2bf2
	I0914 19:05:55.400941   29302 command_runner.go:130] > 79de1cbad023
	I0914 19:05:55.400947   29302 command_runner.go:130] > bdae306df774
	I0914 19:05:55.400953   29302 command_runner.go:130] > 7ae1932584ff
	I0914 19:05:55.400959   29302 command_runner.go:130] > 3204588282f3
	I0914 19:05:55.400965   29302 command_runner.go:130] > c60a4b7edf2a
	I0914 19:05:55.400970   29302 command_runner.go:130] > bf69af78fefd
	I0914 19:05:55.400976   29302 command_runner.go:130] > 992d221cf3de
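
Before reconfiguring, the restarter enumerates every container whose name matches the kube-system naming pattern and stops them with a single docker stop, as echoed above. A compact sketch of that step, assuming the docker CLI is reachable locally:

    package main

    import (
        "log"
        "os/exec"
        "strings"
    )

    // stopKubeSystemContainers lists containers whose names follow the
    // k8s_<container>_<pod>_(kube-system)_ scheme and stops them.
    func stopKubeSystemContainers() error {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter=name=k8s_.*_(kube-system)_", "--format={{.ID}}").Output()
        if err != nil {
            return err
        }
        ids := strings.Fields(string(out))
        if len(ids) == 0 {
            return nil // nothing to stop
        }
        return exec.Command("docker", append([]string{"stop"}, ids...)...).Run()
    }

    func main() {
        if err := stopKubeSystemContainers(); err != nil {
            log.Fatal(err)
        }
    }
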
	I0914 19:05:55.402045   29302 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 19:05:55.416372   29302 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 19:05:55.424910   29302 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0914 19:05:55.424932   29302 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0914 19:05:55.424943   29302 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0914 19:05:55.424952   29302 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 19:05:55.424980   29302 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 19:05:55.425021   29302 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 19:05:55.433299   29302 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0914 19:05:55.433317   29302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 19:05:55.549527   29302 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 19:05:55.549554   29302 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0914 19:05:55.549564   29302 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0914 19:05:55.549574   29302 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 19:05:55.549583   29302 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0914 19:05:55.549599   29302 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0914 19:05:55.549609   29302 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0914 19:05:55.549615   29302 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0914 19:05:55.549624   29302 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0914 19:05:55.549633   29302 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 19:05:55.549640   29302 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 19:05:55.549657   29302 command_runner.go:130] > [certs] Using the existing "sa" key
	I0914 19:05:55.549745   29302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 19:05:55.598988   29302 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 19:05:55.824313   29302 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 19:05:55.900894   29302 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 19:05:56.276915   29302 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 19:05:56.339928   29302 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 19:05:56.342661   29302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 19:05:56.405203   29302 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 19:05:56.406633   29302 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 19:05:56.407055   29302 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0914 19:05:56.524034   29302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 19:05:56.589683   29302 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 19:05:56.589714   29302 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 19:05:56.593812   29302 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 19:05:56.595032   29302 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 19:05:56.597321   29302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 19:05:56.696497   29302 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
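
The block above is the heart of the restart: rather than a full kubeadm init, the existing cluster is rebuilt phase by phase (certs, kubeconfig, kubelet-start, control-plane, etcd). A sketch of the same sequence driven from Go; the staged-binary PATH prefix and the config path are the ones visible in the log, everything else is assumed:

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, phase := range phases {
            args := append([]string{"init", "phase"}, phase...)
            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
            cmd := exec.Command("kubeadm", args...)
            // Prefer the kubeadm staged for this Kubernetes version, as the log does.
            cmd.Env = append(os.Environ(),
                "PATH=/var/lib/minikube/binaries/v1.28.1:"+os.Getenv("PATH"))
            if out, err := cmd.CombinedOutput(); err != nil {
                log.Fatalf("kubeadm init phase %v: %v\n%s", phase, err, out)
            }
        }
    }
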
	I0914 19:05:56.699815   29302 api_server.go:52] waiting for apiserver process to appear ...
	I0914 19:05:56.699898   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 19:05:56.713289   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 19:05:57.226345   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 19:05:57.726390   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 19:05:58.226095   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 19:05:58.726390   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 19:05:59.226644   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 19:05:59.241067   29302 command_runner.go:130] > 1693
	I0914 19:05:59.241381   29302 api_server.go:72] duration metric: took 2.541565826s to wait for apiserver process to appear ...
	I0914 19:05:59.241402   29302 api_server.go:88] waiting for apiserver healthz status ...
	I0914 19:05:59.241422   29302 api_server.go:253] Checking apiserver healthz at https://192.168.39.14:8443/healthz ...
	I0914 19:06:02.195757   29302 api_server.go:279] https://192.168.39.14:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 19:06:02.195786   29302 api_server.go:103] status: https://192.168.39.14:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 19:06:02.195796   29302 api_server.go:253] Checking apiserver healthz at https://192.168.39.14:8443/healthz ...
	I0914 19:06:02.307219   29302 api_server.go:279] https://192.168.39.14:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 19:06:02.307250   29302 api_server.go:103] status: https://192.168.39.14:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 19:06:02.807963   29302 api_server.go:253] Checking apiserver healthz at https://192.168.39.14:8443/healthz ...
	I0914 19:06:02.814842   29302 api_server.go:279] https://192.168.39.14:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 19:06:02.814876   29302 api_server.go:103] status: https://192.168.39.14:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 19:06:03.307503   29302 api_server.go:253] Checking apiserver healthz at https://192.168.39.14:8443/healthz ...
	I0914 19:06:03.315888   29302 api_server.go:279] https://192.168.39.14:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 19:06:03.315914   29302 api_server.go:103] status: https://192.168.39.14:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 19:06:03.807505   29302 api_server.go:253] Checking apiserver healthz at https://192.168.39.14:8443/healthz ...
	I0914 19:06:03.812721   29302 api_server.go:279] https://192.168.39.14:8443/healthz returned 200:
	ok
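
The 403 and 500 responses above are expected while the apiserver works through its post-start hooks; the restarter simply keeps probing /healthz until it gets a 200. A sketch of that probe, trusting only the cluster CA and sending no client certificate (which is why the first response above is an anonymous 403); the URL and CA path match the log, the timing values are assumptions:

    package main

    import (
        "context"
        "crypto/tls"
        "crypto/x509"
        "fmt"
        "io"
        "log"
        "net/http"
        "os"
        "time"
    )

    // probeHealthz polls the apiserver /healthz endpoint until it returns 200
    // or the context expires.
    func probeHealthz(ctx context.Context, url, caPath string) error {
        caPEM, err := os.ReadFile(caPath)
        if err != nil {
            return err
        }
        pool := x509.NewCertPool()
        pool.AppendCertsFromPEM(caPEM)
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
        }
        for {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-time.After(500 * time.Millisecond):
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
        defer cancel()
        if err := probeHealthz(ctx, "https://192.168.39.14:8443/healthz",
            "/var/lib/minikube/certs/ca.crt"); err != nil {
            log.Fatal(err)
        }
    }
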
	I0914 19:06:03.812788   29302 round_trippers.go:463] GET https://192.168.39.14:8443/version
	I0914 19:06:03.812794   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:03.812802   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:03.812809   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:03.821345   29302 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0914 19:06:03.821376   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:03.821387   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:03.821396   29302 round_trippers.go:580]     Content-Length: 263
	I0914 19:06:03.821402   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:03 GMT
	I0914 19:06:03.821410   29302 round_trippers.go:580]     Audit-Id: a2a9e97f-3007-4290-8f99-481d06fc6049
	I0914 19:06:03.821417   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:03.821424   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:03.821433   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:03.821483   29302 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.1",
	  "gitCommit": "8dc49c4b984b897d423aab4971090e1879eb4f23",
	  "gitTreeState": "clean",
	  "buildDate": "2023-08-24T11:16:30Z",
	  "goVersion": "go1.20.7",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0914 19:06:03.821569   29302 api_server.go:141] control plane version: v1.28.1
	I0914 19:06:03.821589   29302 api_server.go:131] duration metric: took 4.580178903s to wait for apiserver health ...
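
Once /healthz returns ok, the control-plane version is read back from /version; the gitVersion field is what produces the "control plane version: v1.28.1" line above. A tiny sketch of decoding that response body, declaring only the fields used here:

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
    )

    // versionInfo mirrors the /version response shown above, limited to the
    // fields this sketch needs.
    type versionInfo struct {
        Major      string `json:"major"`
        Minor      string `json:"minor"`
        GitVersion string `json:"gitVersion"`
    }

    func main() {
        raw := []byte(`{"major": "1", "minor": "28", "gitVersion": "v1.28.1"}`)
        var v versionInfo
        if err := json.Unmarshal(raw, &v); err != nil {
            log.Fatal(err)
        }
        fmt.Println("control plane version:", v.GitVersion)
    }
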
	I0914 19:06:03.821600   29302 cni.go:84] Creating CNI manager for ""
	I0914 19:06:03.821611   29302 cni.go:136] 3 nodes found, recommending kindnet
	I0914 19:06:03.823525   29302 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0914 19:06:03.825085   29302 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0914 19:06:03.832345   29302 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0914 19:06:03.832364   29302 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0914 19:06:03.832370   29302 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0914 19:06:03.832380   29302 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0914 19:06:03.832391   29302 command_runner.go:130] > Access: 2023-09-14 19:05:33.824543091 +0000
	I0914 19:06:03.832399   29302 command_runner.go:130] > Modify: 2023-09-12 03:24:25.000000000 +0000
	I0914 19:06:03.832416   29302 command_runner.go:130] > Change: 2023-09-14 19:05:31.874543091 +0000
	I0914 19:06:03.832422   29302 command_runner.go:130] >  Birth: -
	I0914 19:06:03.832466   29302 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
	I0914 19:06:03.832475   29302 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0914 19:06:03.901488   29302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0914 19:06:05.205755   29302 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0914 19:06:05.209188   29302 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0914 19:06:05.212024   29302 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0914 19:06:05.225376   29302 command_runner.go:130] > daemonset.apps/kindnet configured
	I0914 19:06:05.229823   29302 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.32829993s)
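
With the apiserver healthy, the kindnet CNI manifest that was copied to /var/tmp/minikube/cni.yaml is applied with the staged kubectl, producing the "unchanged"/"configured" lines above. A sketch of that apply step; the binary, kubeconfig, and manifest paths are the ones from the log:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    // applyManifest runs the staged kubectl against the in-VM kubeconfig,
    // mirroring the CNI apply above.
    func applyManifest(kubectl, kubeconfig, manifest string) error {
        out, err := exec.Command("sudo", kubectl, "apply",
            "--kubeconfig="+kubeconfig, "-f", manifest).CombinedOutput()
        if err != nil {
            return fmt.Errorf("kubectl apply: %v\n%s", err, out)
        }
        fmt.Printf("%s", out)
        return nil
    }

    func main() {
        err := applyManifest("/var/lib/minikube/binaries/v1.28.1/kubectl",
            "/var/lib/minikube/kubeconfig", "/var/tmp/minikube/cni.yaml")
        if err != nil {
            log.Fatal(err)
        }
    }
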
	I0914 19:06:05.229853   29302 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 19:06:05.229964   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
	I0914 19:06:05.229975   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:05.229982   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:05.229988   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:05.234117   29302 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 19:06:05.234139   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:05.234149   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:05 GMT
	I0914 19:06:05.234158   29302 round_trippers.go:580]     Audit-Id: 78bdb13b-ed79-4db3-8008-4289bacf78fd
	I0914 19:06:05.234172   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:05.234180   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:05.234188   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:05.234195   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:05.236145   29302 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"795"},"items":[{"metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"790","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84544 chars]
	I0914 19:06:05.239946   29302 system_pods.go:59] 12 kube-system pods found
	I0914 19:06:05.239984   29302 system_pods.go:61] "coredns-5dd5756b68-qrv2r" [f9293d00-1000-4ffa-b978-d08c00eee7e7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 19:06:05.239998   29302 system_pods.go:61] "etcd-multinode-040952" [69002c12-b452-4986-a79f-1d67702a52ef] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 19:06:05.240008   29302 system_pods.go:61] "kindnet-hvz8s" [38b5564e-8c11-45e8-9751-bcaa4258a342] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0914 19:06:05.240015   29302 system_pods.go:61] "kindnet-lrkhw" [9861f216-97e0-4761-9531-cb34e8855913] Running
	I0914 19:06:05.240026   29302 system_pods.go:61] "kindnet-pjfsc" [7716e479-4492-439b-9bdf-077a541dc949] Running
	I0914 19:06:05.240036   29302 system_pods.go:61] "kube-apiserver-multinode-040952" [10fd42d2-c2af-48e4-8724-c8ffe95daa20] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 19:06:05.240054   29302 system_pods.go:61] "kube-controller-manager-multinode-040952" [a3657cb3-c202-4067-83e1-e015b97f23c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 19:06:05.240067   29302 system_pods.go:61] "kube-proxy-gldkh" [55ba7c02-d066-4399-a622-621499fbc662] Running
	I0914 19:06:05.240073   29302 system_pods.go:61] "kube-proxy-gpl2p" [4e6ab5b8-53fa-4e56-b534-e130dc2b3c0f] Running
	I0914 19:06:05.240087   29302 system_pods.go:61] "kube-proxy-hbsmt" [d68fe199-9969-47a9-95a1-04e766c5dbaa] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0914 19:06:05.240101   29302 system_pods.go:61] "kube-scheduler-multinode-040952" [386eb63c-5554-4ab9-8241-b096f390ee9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 19:06:05.240113   29302 system_pods.go:61] "storage-provisioner" [8f25fe5b-237f-415a-baca-e4342106bb4d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 19:06:05.240123   29302 system_pods.go:74] duration metric: took 10.263188ms to wait for pod list to return data ...
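
The pod summary above comes from a single list of the kube-system namespace. The same check expressed with client-go, rather than minikube's logged round-trippers, would look roughly like this; the kubeconfig path is a placeholder:

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
        for _, p := range pods.Items {
            fmt.Printf("  %s: %s\n", p.Name, p.Status.Phase)
        }
    }
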
	I0914 19:06:05.240135   29302 node_conditions.go:102] verifying NodePressure condition ...
	I0914 19:06:05.240193   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes
	I0914 19:06:05.240202   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:05.240212   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:05.240223   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:05.245363   29302 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0914 19:06:05.245382   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:05.245393   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:05.245401   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:05 GMT
	I0914 19:06:05.245416   29302 round_trippers.go:580]     Audit-Id: ee9162aa-d308-4bb2-927d-55e7e1011d87
	I0914 19:06:05.245424   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:05.245435   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:05.245471   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:05.245800   29302 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"795"},"items":[{"metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"782","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 13790 chars]
	I0914 19:06:05.246934   29302 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 19:06:05.246965   29302 node_conditions.go:123] node cpu capacity is 2
	I0914 19:06:05.246982   29302 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 19:06:05.246996   29302 node_conditions.go:123] node cpu capacity is 2
	I0914 19:06:05.247002   29302 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 19:06:05.247012   29302 node_conditions.go:123] node cpu capacity is 2
	I0914 19:06:05.247020   29302 node_conditions.go:105] duration metric: took 6.879016ms to run NodePressure ...
	I0914 19:06:05.247043   29302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 19:06:05.487041   29302 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0914 19:06:05.487069   29302 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0914 19:06:05.487097   29302 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0914 19:06:05.487490   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0914 19:06:05.487506   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:05.487516   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:05.487526   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:05.491797   29302 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 19:06:05.491820   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:05.491831   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:05.491840   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:05.491848   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:05.491857   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:05.491866   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:05 GMT
	I0914 19:06:05.491875   29302 round_trippers.go:580]     Audit-Id: 9814298e-c189-437e-bfca-dbe0a19423d2
	I0914 19:06:05.492280   29302 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"797"},"items":[{"metadata":{"name":"etcd-multinode-040952","namespace":"kube-system","uid":"69002c12-b452-4986-a79f-1d67702a52ef","resourceVersion":"785","creationTimestamp":"2023-09-14T19:01:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.14:2379","kubernetes.io/config.hash":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.mirror":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.seen":"2023-09-14T19:01:40.726714562Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations
":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:ku [truncated 29761 chars]
	I0914 19:06:05.493221   29302 kubeadm.go:787] kubelet initialised
	I0914 19:06:05.493240   29302 kubeadm.go:788] duration metric: took 6.131207ms waiting for restarted kubelet to initialise ...
	I0914 19:06:05.493249   29302 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 19:06:05.493307   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
	I0914 19:06:05.493322   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:05.493334   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:05.493347   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:05.496849   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:05.496867   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:05.496876   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:05.496885   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:05.496892   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:05 GMT
	I0914 19:06:05.496901   29302 round_trippers.go:580]     Audit-Id: a7031aa1-24df-4c90-9e52-85f8f96f783c
	I0914 19:06:05.496912   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:05.496921   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:05.497873   29302 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"797"},"items":[{"metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"790","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84544 chars]
	I0914 19:06:05.500273   29302 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-qrv2r" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:05.500335   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qrv2r
	I0914 19:06:05.500343   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:05.500350   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:05.500356   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:05.502411   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:05.502429   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:05.502441   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:05.502449   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:05.502459   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:05.502469   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:05.502478   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:05 GMT
	I0914 19:06:05.502490   29302 round_trippers.go:580]     Audit-Id: f347830a-65d2-4cb4-8423-8b8fc5cc870f
	I0914 19:06:05.502830   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"790","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6544 chars]
	I0914 19:06:05.503304   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:05.503318   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:05.503328   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:05.503337   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:05.505839   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:05.505853   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:05.505864   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:05.505870   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:05.505875   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:05.505880   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:05.505886   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:05 GMT
	I0914 19:06:05.505894   29302 round_trippers.go:580]     Audit-Id: 71902073-b1b8-4c71-b1d1-af71d48217f1
	I0914 19:06:05.506071   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"782","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5283 chars]
	I0914 19:06:05.506467   29302 pod_ready.go:97] node "multinode-040952" hosting pod "coredns-5dd5756b68-qrv2r" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-040952" has status "Ready":"False"
	I0914 19:06:05.506490   29302 pod_ready.go:81] duration metric: took 6.199179ms waiting for pod "coredns-5dd5756b68-qrv2r" in "kube-system" namespace to be "Ready" ...
	E0914 19:06:05.506501   29302 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-040952" hosting pod "coredns-5dd5756b68-qrv2r" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-040952" has status "Ready":"False"
	I0914 19:06:05.506518   29302 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-040952" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:05.506572   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-040952
	I0914 19:06:05.506583   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:05.506593   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:05.506606   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:05.508379   29302 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 19:06:05.508391   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:05.508397   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:05.508403   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:05.508408   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:05 GMT
	I0914 19:06:05.508414   29302 round_trippers.go:580]     Audit-Id: adfe03d4-2812-4ba5-98dd-67afaa529395
	I0914 19:06:05.508419   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:05.508425   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:05.508772   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-040952","namespace":"kube-system","uid":"69002c12-b452-4986-a79f-1d67702a52ef","resourceVersion":"785","creationTimestamp":"2023-09-14T19:01:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.14:2379","kubernetes.io/config.hash":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.mirror":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.seen":"2023-09-14T19:01:40.726714562Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6295 chars]
	I0914 19:06:05.509094   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:05.509104   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:05.509111   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:05.509116   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:05.510985   29302 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 19:06:05.511003   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:05.511012   29302 round_trippers.go:580]     Audit-Id: 0ee321ba-916a-449f-a719-2eb1a4973cde
	I0914 19:06:05.511019   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:05.511028   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:05.511036   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:05.511044   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:05.511057   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:05 GMT
	I0914 19:06:05.511184   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"782","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5283 chars]
	I0914 19:06:05.511454   29302 pod_ready.go:97] node "multinode-040952" hosting pod "etcd-multinode-040952" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-040952" has status "Ready":"False"
	I0914 19:06:05.511470   29302 pod_ready.go:81] duration metric: took 4.945047ms waiting for pod "etcd-multinode-040952" in "kube-system" namespace to be "Ready" ...
	E0914 19:06:05.511477   29302 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-040952" hosting pod "etcd-multinode-040952" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-040952" has status "Ready":"False"
	I0914 19:06:05.511489   29302 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-040952" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:05.511533   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-040952
	I0914 19:06:05.511540   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:05.511546   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:05.511552   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:05.513172   29302 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 19:06:05.513189   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:05.513198   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:05.513206   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:05.513213   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:05.513222   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:05.513230   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:05 GMT
	I0914 19:06:05.513246   29302 round_trippers.go:580]     Audit-Id: 98886ad5-cb3e-42c1-9236-b75a8e09f5f5
	I0914 19:06:05.513380   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-040952","namespace":"kube-system","uid":"10fd42d2-c2af-48e4-8724-c8ffe95daa20","resourceVersion":"786","creationTimestamp":"2023-09-14T19:01:41Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.14:8443","kubernetes.io/config.hash":"8756931ebb3ad632d1fa90a79d546b12","kubernetes.io/config.mirror":"8756931ebb3ad632d1fa90a79d546b12","kubernetes.io/config.seen":"2023-09-14T19:01:40.726715710Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7850 chars]
	I0914 19:06:05.513760   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:05.513773   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:05.513780   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:05.513786   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:05.515437   29302 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 19:06:05.515456   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:05.515464   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:05.515472   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:05 GMT
	I0914 19:06:05.515481   29302 round_trippers.go:580]     Audit-Id: cc794f2f-df9b-4b8c-8271-303fbb3bda2a
	I0914 19:06:05.515489   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:05.515502   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:05.515510   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:05.515753   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"782","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5283 chars]
	I0914 19:06:05.516001   29302 pod_ready.go:97] node "multinode-040952" hosting pod "kube-apiserver-multinode-040952" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-040952" has status "Ready":"False"
	I0914 19:06:05.516014   29302 pod_ready.go:81] duration metric: took 4.515313ms waiting for pod "kube-apiserver-multinode-040952" in "kube-system" namespace to be "Ready" ...
	E0914 19:06:05.516021   29302 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-040952" hosting pod "kube-apiserver-multinode-040952" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-040952" has status "Ready":"False"
	I0914 19:06:05.516027   29302 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-040952" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:05.516066   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-040952
	I0914 19:06:05.516073   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:05.516080   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:05.516086   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:05.518245   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:05.518263   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:05.518277   29302 round_trippers.go:580]     Audit-Id: 6779b7f0-25f9-49d1-be85-87a44d8c3552
	I0914 19:06:05.518286   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:05.518294   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:05.518301   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:05.518314   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:05.518322   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:05 GMT
	I0914 19:06:05.518564   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-040952","namespace":"kube-system","uid":"a3657cb3-c202-4067-83e1-e015b97f23c7","resourceVersion":"783","creationTimestamp":"2023-09-14T19:01:41Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"eae1e4ee5d796cbce52373fd75c02fd6","kubernetes.io/config.mirror":"eae1e4ee5d796cbce52373fd75c02fd6","kubernetes.io/config.seen":"2023-09-14T19:01:40.726708753Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7436 chars]
	I0914 19:06:05.630264   29302 request.go:629] Waited for 111.324976ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:05.630352   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:05.630359   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:05.630372   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:05.630382   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:05.632981   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:05.633000   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:05.633006   29302 round_trippers.go:580]     Audit-Id: fd7872d6-edd4-429f-97f2-b2ec1c12de54
	I0914 19:06:05.633012   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:05.633017   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:05.633023   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:05.633028   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:05.633036   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:05 GMT
	I0914 19:06:05.633196   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"782","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5283 chars]
	I0914 19:06:05.633629   29302 pod_ready.go:97] node "multinode-040952" hosting pod "kube-controller-manager-multinode-040952" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-040952" has status "Ready":"False"
	I0914 19:06:05.633656   29302 pod_ready.go:81] duration metric: took 117.619154ms waiting for pod "kube-controller-manager-multinode-040952" in "kube-system" namespace to be "Ready" ...
	E0914 19:06:05.633669   29302 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-040952" hosting pod "kube-controller-manager-multinode-040952" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-040952" has status "Ready":"False"
	I0914 19:06:05.633680   29302 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-gldkh" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:05.830043   29302 request.go:629] Waited for 196.287848ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gldkh
	I0914 19:06:05.830099   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gldkh
	I0914 19:06:05.830103   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:05.830111   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:05.830118   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:05.832762   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:05.832785   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:05.832794   29302 round_trippers.go:580]     Audit-Id: 3c18be9a-6c71-4025-be83-5fc9c53246a5
	I0914 19:06:05.832801   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:05.832808   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:05.832815   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:05.832822   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:05.832829   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:05 GMT
	I0914 19:06:05.833118   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gldkh","generateName":"kube-proxy-","namespace":"kube-system","uid":"55ba7c02-d066-4399-a622-621499fbc662","resourceVersion":"541","creationTimestamp":"2023-09-14T19:02:56Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b81636f3-a9be-4290-be24-324c7fac8ce6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:02:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b81636f3-a9be-4290-be24-324c7fac8ce6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5541 chars]
	I0914 19:06:06.029994   29302 request.go:629] Waited for 196.460915ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/multinode-040952-m02
	I0914 19:06:06.030079   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952-m02
	I0914 19:06:06.030087   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:06.030099   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:06.030108   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:06.032502   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:06.032520   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:06.032527   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:05 GMT
	I0914 19:06:06.032532   29302 round_trippers.go:580]     Audit-Id: 9d3f52cf-02ab-4abb-92c1-8a7d06224f0e
	I0914 19:06:06.032538   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:06.032542   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:06.032547   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:06.032553   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:06.032888   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952-m02","uid":"26bddb4d-d211-4e3d-a188-317e100d2aa5","resourceVersion":"608","creationTimestamp":"2023-09-14T19:02:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:02:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3266 chars]
	I0914 19:06:06.033151   29302 pod_ready.go:92] pod "kube-proxy-gldkh" in "kube-system" namespace has status "Ready":"True"
	I0914 19:06:06.033165   29302 pod_ready.go:81] duration metric: took 399.477836ms waiting for pod "kube-proxy-gldkh" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:06.033173   29302 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-gpl2p" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:06.230655   29302 request.go:629] Waited for 197.428191ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gpl2p
	I0914 19:06:06.230712   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gpl2p
	I0914 19:06:06.230718   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:06.230725   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:06.230733   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:06.233365   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:06.233384   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:06.233391   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:06 GMT
	I0914 19:06:06.233397   29302 round_trippers.go:580]     Audit-Id: 53af8c6b-f3d3-4507-ba18-bcb4d7a95376
	I0914 19:06:06.233406   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:06.233422   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:06.233431   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:06.233443   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:06.233771   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gpl2p","generateName":"kube-proxy-","namespace":"kube-system","uid":"4e6ab5b8-53fa-4e56-b534-e130dc2b3c0f","resourceVersion":"761","creationTimestamp":"2023-09-14T19:03:50Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b81636f3-a9be-4290-be24-324c7fac8ce6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:03:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b81636f3-a9be-4290-be24-324c7fac8ce6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5750 chars]
	I0914 19:06:06.430710   29302 request.go:629] Waited for 196.348215ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/multinode-040952-m03
	I0914 19:06:06.430762   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952-m03
	I0914 19:06:06.430769   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:06.430779   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:06.430788   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:06.433906   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:06.433930   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:06.433942   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:06.433951   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:06.433960   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:06.433969   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:06 GMT
	I0914 19:06:06.433985   29302 round_trippers.go:580]     Audit-Id: 1280bf02-d81c-4bca-b4e5-275129840268
	I0914 19:06:06.433994   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:06.434112   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952-m03","uid":"28b45907-e363-4b10-afa7-ecf3cea247b8","resourceVersion":"772","creationTimestamp":"2023-09-14T19:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3204 chars]
	I0914 19:06:06.434453   29302 pod_ready.go:92] pod "kube-proxy-gpl2p" in "kube-system" namespace has status "Ready":"True"
	I0914 19:06:06.434474   29302 pod_ready.go:81] duration metric: took 401.294532ms waiting for pod "kube-proxy-gpl2p" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:06.434488   29302 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-hbsmt" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:06.630939   29302 request.go:629] Waited for 196.385647ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hbsmt
	I0914 19:06:06.631022   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hbsmt
	I0914 19:06:06.631030   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:06.631042   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:06.631051   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:06.633497   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:06.633520   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:06.633530   29302 round_trippers.go:580]     Audit-Id: 1dc1f940-384d-494a-8e64-361f1ad205ba
	I0914 19:06:06.633543   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:06.633552   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:06.633562   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:06.633573   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:06.633584   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:06 GMT
	I0914 19:06:06.633766   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hbsmt","generateName":"kube-proxy-","namespace":"kube-system","uid":"d68fe199-9969-47a9-95a1-04e766c5dbaa","resourceVersion":"788","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b81636f3-a9be-4290-be24-324c7fac8ce6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b81636f3-a9be-4290-be24-324c7fac8ce6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5928 chars]
	I0914 19:06:06.830679   29302 request.go:629] Waited for 196.393813ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:06.830735   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:06.830740   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:06.830747   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:06.830754   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:06.833354   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:06.833375   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:06.833382   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:06.833387   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:06.833392   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:06.833397   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:06.833402   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:06 GMT
	I0914 19:06:06.833407   29302 round_trippers.go:580]     Audit-Id: a24b66f4-fa51-4df4-9bc5-590f310c8108
	I0914 19:06:06.833985   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"782","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5283 chars]
	I0914 19:06:06.834382   29302 pod_ready.go:97] node "multinode-040952" hosting pod "kube-proxy-hbsmt" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-040952" has status "Ready":"False"
	I0914 19:06:06.834408   29302 pod_ready.go:81] duration metric: took 399.910926ms waiting for pod "kube-proxy-hbsmt" in "kube-system" namespace to be "Ready" ...
	E0914 19:06:06.834420   29302 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-040952" hosting pod "kube-proxy-hbsmt" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-040952" has status "Ready":"False"
	I0914 19:06:06.834433   29302 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-040952" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:07.030857   29302 request.go:629] Waited for 196.352242ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-040952
	I0914 19:06:07.030940   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-040952
	I0914 19:06:07.030951   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:07.030964   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:07.030977   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:07.034225   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:07.034245   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:07.034253   29302 round_trippers.go:580]     Audit-Id: 71cfae50-3c69-4f2b-8709-aad710c8dec2
	I0914 19:06:07.034260   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:07.034268   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:07.034276   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:07.034289   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:07.034298   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:06 GMT
	I0914 19:06:07.034501   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-040952","namespace":"kube-system","uid":"386eb63c-5554-4ab9-8241-b096f390ee9c","resourceVersion":"784","creationTimestamp":"2023-09-14T19:01:38Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f83b231eda73d0afcf9cdab17117c7e6","kubernetes.io/config.mirror":"f83b231eda73d0afcf9cdab17117c7e6","kubernetes.io/config.seen":"2023-09-14T19:01:32.411176140Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5148 chars]
	I0914 19:06:07.230128   29302 request.go:629] Waited for 195.265564ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:07.230211   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:07.230221   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:07.230229   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:07.230235   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:07.233612   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:07.233631   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:07.233641   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:07 GMT
	I0914 19:06:07.233648   29302 round_trippers.go:580]     Audit-Id: c6e16c92-92f1-4f61-b0d2-523db2c467d1
	I0914 19:06:07.233656   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:07.233665   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:07.233675   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:07.233684   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:07.234058   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"782","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5283 chars]
	I0914 19:06:07.234344   29302 pod_ready.go:97] node "multinode-040952" hosting pod "kube-scheduler-multinode-040952" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-040952" has status "Ready":"False"
	I0914 19:06:07.234368   29302 pod_ready.go:81] duration metric: took 399.923264ms waiting for pod "kube-scheduler-multinode-040952" in "kube-system" namespace to be "Ready" ...
	E0914 19:06:07.234381   29302 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-040952" hosting pod "kube-scheduler-multinode-040952" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-040952" has status "Ready":"False"
	I0914 19:06:07.234393   29302 pod_ready.go:38] duration metric: took 1.741133779s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 19:06:07.234417   29302 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 19:06:07.250231   29302 command_runner.go:130] > -16
	I0914 19:06:07.250255   29302 ops.go:34] apiserver oom_adj: -16
	I0914 19:06:07.250263   29302 kubeadm.go:640] restartCluster took 21.909989817s
	I0914 19:06:07.250271   29302 kubeadm.go:406] StartCluster complete in 21.938026901s
	I0914 19:06:07.250290   29302 settings.go:142] acquiring lock: {Name:mkaf2d84e9fceec2029b98353d3d8cae1b369e09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 19:06:07.250389   29302 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17217-7285/kubeconfig
	I0914 19:06:07.251059   29302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17217-7285/kubeconfig: {Name:mkd810f3a7b7ee0c3e3eff94a19f3da881e8200c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 19:06:07.251279   29302 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 19:06:07.251383   29302 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0914 19:06:07.253531   29302 out.go:177] * Enabled addons: 
	I0914 19:06:07.251517   29302 config.go:182] Loaded profile config "multinode-040952": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 19:06:07.251534   29302 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17217-7285/kubeconfig
	I0914 19:06:07.255467   29302 addons.go:502] enable addons completed in 4.093858ms: enabled=[]
	I0914 19:06:07.255670   29302 kapi.go:59] client config for multinode-040952: &rest.Config{Host:"https://192.168.39.14:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/client.crt", KeyFile:"/home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/client.key", CAFile:"/home/jenkins/minikube-integration/17217-7285/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c15e60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0914 19:06:07.255997   29302 round_trippers.go:463] GET https://192.168.39.14:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0914 19:06:07.256010   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:07.256017   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:07.256025   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:07.263309   29302 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0914 19:06:07.263329   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:07.263340   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:07.263348   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:07.263354   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:07.263359   29302 round_trippers.go:580]     Content-Length: 291
	I0914 19:06:07.263365   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:07 GMT
	I0914 19:06:07.263370   29302 round_trippers.go:580]     Audit-Id: 5a75d744-b3cd-40e6-abf4-7b1c8daac075
	I0914 19:06:07.263377   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:07.263397   29302 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"9776e459-4280-488a-924c-4e921bbd9495","resourceVersion":"796","creationTimestamp":"2023-09-14T19:01:40Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0914 19:06:07.263508   29302 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-040952" context rescaled to 1 replicas
	I0914 19:06:07.263529   29302 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.14 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 19:06:07.264985   29302 out.go:177] * Verifying Kubernetes components...
	I0914 19:06:07.266359   29302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 19:06:07.389385   29302 command_runner.go:130] > apiVersion: v1
	I0914 19:06:07.389403   29302 command_runner.go:130] > data:
	I0914 19:06:07.389408   29302 command_runner.go:130] >   Corefile: |
	I0914 19:06:07.389411   29302 command_runner.go:130] >     .:53 {
	I0914 19:06:07.389415   29302 command_runner.go:130] >         log
	I0914 19:06:07.389421   29302 command_runner.go:130] >         errors
	I0914 19:06:07.389425   29302 command_runner.go:130] >         health {
	I0914 19:06:07.389429   29302 command_runner.go:130] >            lameduck 5s
	I0914 19:06:07.389433   29302 command_runner.go:130] >         }
	I0914 19:06:07.389437   29302 command_runner.go:130] >         ready
	I0914 19:06:07.389443   29302 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0914 19:06:07.389447   29302 command_runner.go:130] >            pods insecure
	I0914 19:06:07.389455   29302 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0914 19:06:07.389473   29302 command_runner.go:130] >            ttl 30
	I0914 19:06:07.389477   29302 command_runner.go:130] >         }
	I0914 19:06:07.389483   29302 command_runner.go:130] >         prometheus :9153
	I0914 19:06:07.389487   29302 command_runner.go:130] >         hosts {
	I0914 19:06:07.389493   29302 command_runner.go:130] >            192.168.39.1 host.minikube.internal
	I0914 19:06:07.389497   29302 command_runner.go:130] >            fallthrough
	I0914 19:06:07.389501   29302 command_runner.go:130] >         }
	I0914 19:06:07.389508   29302 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0914 19:06:07.389513   29302 command_runner.go:130] >            max_concurrent 1000
	I0914 19:06:07.389517   29302 command_runner.go:130] >         }
	I0914 19:06:07.389520   29302 command_runner.go:130] >         cache 30
	I0914 19:06:07.389527   29302 command_runner.go:130] >         loop
	I0914 19:06:07.389532   29302 command_runner.go:130] >         reload
	I0914 19:06:07.389541   29302 command_runner.go:130] >         loadbalance
	I0914 19:06:07.389549   29302 command_runner.go:130] >     }
	I0914 19:06:07.389558   29302 command_runner.go:130] > kind: ConfigMap
	I0914 19:06:07.389564   29302 command_runner.go:130] > metadata:
	I0914 19:06:07.389573   29302 command_runner.go:130] >   creationTimestamp: "2023-09-14T19:01:40Z"
	I0914 19:06:07.389585   29302 command_runner.go:130] >   name: coredns
	I0914 19:06:07.389594   29302 command_runner.go:130] >   namespace: kube-system
	I0914 19:06:07.389604   29302 command_runner.go:130] >   resourceVersion: "404"
	I0914 19:06:07.389612   29302 command_runner.go:130] >   uid: 77b79b35-a304-4075-b4c4-6b8a52cfe75c
	I0914 19:06:07.389643   29302 node_ready.go:35] waiting up to 6m0s for node "multinode-040952" to be "Ready" ...
	I0914 19:06:07.389797   29302 start.go:890] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0914 19:06:07.431021   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:07.431047   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:07.431059   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:07.431069   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:07.434336   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:07.434359   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:07.434367   29302 round_trippers.go:580]     Audit-Id: f0218504-ef8b-4fee-a836-3f16c97e6d1d
	I0914 19:06:07.434372   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:07.434378   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:07.434383   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:07.434389   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:07.434399   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:07 GMT
	I0914 19:06:07.434888   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"782","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5283 chars]
	I0914 19:06:07.630657   29302 request.go:629] Waited for 195.358734ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:07.630713   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:07.630720   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:07.630729   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:07.630738   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:07.635002   29302 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 19:06:07.635021   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:07.635027   29302 round_trippers.go:580]     Audit-Id: 0e51cba7-34eb-44c3-be48-8785725a128f
	I0914 19:06:07.635033   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:07.635038   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:07.635043   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:07.635048   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:07.635053   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:07 GMT
	I0914 19:06:07.635788   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"782","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5283 chars]
	I0914 19:06:08.136884   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:08.136903   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:08.136913   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:08.136919   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:08.140137   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:08.140160   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:08.140168   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:08 GMT
	I0914 19:06:08.140173   29302 round_trippers.go:580]     Audit-Id: 9ec77217-1afd-42b6-aaf7-211e85629e48
	I0914 19:06:08.140179   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:08.140184   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:08.140189   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:08.140194   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:08.140344   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"782","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5283 chars]
	I0914 19:06:08.637040   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:08.637079   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:08.637091   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:08.637101   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:08.639714   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:08.639733   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:08.639744   29302 round_trippers.go:580]     Audit-Id: d47f9fd4-8dec-46b1-8ce9-436c0350c5ca
	I0914 19:06:08.639752   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:08.639760   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:08.639769   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:08.639779   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:08.639788   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:08 GMT
	I0914 19:06:08.640112   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"782","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5283 chars]
	I0914 19:06:09.136649   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:09.136682   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:09.136690   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:09.136696   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:09.139686   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:09.139704   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:09.139715   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:09.139724   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:09.139733   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:09.139739   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:09 GMT
	I0914 19:06:09.139745   29302 round_trippers.go:580]     Audit-Id: ae97ecdc-ac59-4df9-80fb-ab01ff2852ec
	I0914 19:06:09.139750   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:09.140167   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"782","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5283 chars]
	I0914 19:06:09.636845   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:09.636866   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:09.636874   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:09.636880   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:09.639508   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:09.639525   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:09.639534   29302 round_trippers.go:580]     Audit-Id: 2a2efe7f-361b-45a2-b3cb-a7e9e84043e9
	I0914 19:06:09.639541   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:09.639549   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:09.639558   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:09.639568   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:09.639578   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:09 GMT
	I0914 19:06:09.639997   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"782","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5283 chars]
	I0914 19:06:09.640405   29302 node_ready.go:58] node "multinode-040952" has status "Ready":"False"
	I0914 19:06:10.136599   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:10.136624   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:10.136638   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:10.136648   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:10.140273   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:10.140297   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:10.140306   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:10.140313   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:10.140320   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:10.140332   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:10.140340   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:10 GMT
	I0914 19:06:10.140347   29302 round_trippers.go:580]     Audit-Id: 1af6dc6d-a25f-4a81-86a3-d239224c606e
	I0914 19:06:10.140506   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:10.140798   29302 node_ready.go:49] node "multinode-040952" has status "Ready":"True"
	I0914 19:06:10.140815   29302 node_ready.go:38] duration metric: took 2.751153874s waiting for node "multinode-040952" to be "Ready" ...
	I0914 19:06:10.140825   29302 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 19:06:10.140877   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
	I0914 19:06:10.140887   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:10.140897   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:10.140907   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:10.145518   29302 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 19:06:10.145535   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:10.145542   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:10.145547   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:10 GMT
	I0914 19:06:10.145557   29302 round_trippers.go:580]     Audit-Id: d738ec8e-27bb-4210-8329-89e64df5055c
	I0914 19:06:10.145569   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:10.145579   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:10.145590   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:10.146881   29302 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"868"},"items":[{"metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"790","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 83954 chars]
	I0914 19:06:10.149263   29302 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-qrv2r" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:10.149331   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qrv2r
	I0914 19:06:10.149342   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:10.149353   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:10.149364   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:10.151221   29302 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 19:06:10.151235   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:10.151241   29302 round_trippers.go:580]     Audit-Id: 9dce5aa8-17a9-43c4-9448-421e8ef000fe
	I0914 19:06:10.151247   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:10.151255   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:10.151264   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:10.151281   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:10.151288   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:10 GMT
	I0914 19:06:10.151447   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"790","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6544 chars]
	I0914 19:06:10.151815   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:10.151829   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:10.151839   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:10.151847   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:10.154035   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:10.154047   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:10.154053   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:10.154058   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:10.154063   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:10.154069   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:10 GMT
	I0914 19:06:10.154075   29302 round_trippers.go:580]     Audit-Id: f451201e-e118-40ff-8809-e06aa3aa8567
	I0914 19:06:10.154084   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:10.154352   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:10.154718   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qrv2r
	I0914 19:06:10.154731   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:10.154742   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:10.154752   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:10.156468   29302 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 19:06:10.156482   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:10.156491   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:10.156501   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:10.156513   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:10.156524   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:10 GMT
	I0914 19:06:10.156538   29302 round_trippers.go:580]     Audit-Id: 056aca82-7d21-4539-9de8-316f54300fbb
	I0914 19:06:10.156548   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:10.156671   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"790","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6544 chars]
	I0914 19:06:10.157120   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:10.157136   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:10.157147   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:10.157162   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:10.159000   29302 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 19:06:10.159014   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:10.159023   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:10.159031   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:10.159039   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:10.159049   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:10 GMT
	I0914 19:06:10.159059   29302 round_trippers.go:580]     Audit-Id: 053f7e6a-3d64-496b-a692-e6d8d7de77dc
	I0914 19:06:10.159074   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:10.159292   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:10.660315   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qrv2r
	I0914 19:06:10.660343   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:10.660354   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:10.660364   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:10.662669   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:10.662688   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:10.662694   29302 round_trippers.go:580]     Audit-Id: 0b5959bf-4f92-40f5-bff0-64259ee8d0e9
	I0914 19:06:10.662703   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:10.662711   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:10.662723   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:10.662732   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:10.662744   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:10 GMT
	I0914 19:06:10.663162   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"790","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6544 chars]
	I0914 19:06:10.663793   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:10.663810   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:10.663822   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:10.663830   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:10.667280   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:10.667294   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:10.667299   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:10.667304   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:10.667310   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:10 GMT
	I0914 19:06:10.667315   29302 round_trippers.go:580]     Audit-Id: adc471fd-2452-48eb-9634-4a15a4129e27
	I0914 19:06:10.667320   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:10.667325   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:10.667519   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:11.160702   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qrv2r
	I0914 19:06:11.160731   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:11.160744   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:11.160753   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:11.164208   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:11.164227   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:11.164234   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:11.164240   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:11 GMT
	I0914 19:06:11.164261   29302 round_trippers.go:580]     Audit-Id: 3b81510c-ceb9-488e-bc2e-b21d77b051e2
	I0914 19:06:11.164273   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:11.164281   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:11.164290   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:11.164555   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"790","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6544 chars]
	I0914 19:06:11.165152   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:11.165174   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:11.165187   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:11.165197   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:11.168098   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:11.168117   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:11.168125   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:11.168133   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:11.168142   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:11 GMT
	I0914 19:06:11.168151   29302 round_trippers.go:580]     Audit-Id: 15145bd3-b367-4e99-b3ce-0ae58ef5c733
	I0914 19:06:11.168161   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:11.168168   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:11.168530   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:11.660168   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qrv2r
	I0914 19:06:11.660193   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:11.660205   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:11.660216   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:11.663403   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:11.663424   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:11.663434   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:11.663442   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:11 GMT
	I0914 19:06:11.663449   29302 round_trippers.go:580]     Audit-Id: 3362ce2b-8605-45fd-8885-3eaeb408ef56
	I0914 19:06:11.663457   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:11.663466   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:11.663476   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:11.664334   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"790","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6544 chars]
	I0914 19:06:11.664760   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:11.664775   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:11.664785   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:11.664795   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:11.671505   29302 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0914 19:06:11.671522   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:11.671530   29302 round_trippers.go:580]     Audit-Id: 654293a2-0981-4bec-9543-4726a90c72a3
	I0914 19:06:11.671539   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:11.671551   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:11.671560   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:11.671567   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:11.671576   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:11 GMT
	I0914 19:06:11.671723   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:12.160486   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qrv2r
	I0914 19:06:12.160512   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:12.160524   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:12.160534   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:12.163604   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:12.163624   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:12.163634   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:12.163644   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:12.163652   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:12.163661   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:12 GMT
	I0914 19:06:12.163674   29302 round_trippers.go:580]     Audit-Id: 746f41fe-b54a-4602-ba74-6665d07e9fc7
	I0914 19:06:12.163683   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:12.164257   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"790","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6544 chars]
	I0914 19:06:12.164698   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:12.164712   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:12.164721   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:12.164731   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:12.166907   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:12.166920   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:12.166926   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:12.166934   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:12.166942   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:12 GMT
	I0914 19:06:12.166953   29302 round_trippers.go:580]     Audit-Id: e83a6e6d-40cb-4779-8c0a-8f5c050ff286
	I0914 19:06:12.166961   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:12.166970   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:12.167376   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:12.167641   29302 pod_ready.go:102] pod "coredns-5dd5756b68-qrv2r" in "kube-system" namespace has status "Ready":"False"
	I0914 19:06:12.660012   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qrv2r
	I0914 19:06:12.660034   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:12.660051   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:12.660059   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:12.664300   29302 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 19:06:12.664327   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:12.664338   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:12.664345   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:12.664352   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:12.664360   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:12.664369   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:12 GMT
	I0914 19:06:12.664384   29302 round_trippers.go:580]     Audit-Id: 49e3af30-584c-4ef5-942f-2f32701b7bc7
	I0914 19:06:12.665270   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"790","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6544 chars]
	I0914 19:06:12.665705   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:12.665719   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:12.665729   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:12.665738   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:12.668068   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:12.668088   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:12.668097   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:12.668105   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:12.668112   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:12 GMT
	I0914 19:06:12.668120   29302 round_trippers.go:580]     Audit-Id: 28f046b6-f759-4197-80f7-730e48f958ff
	I0914 19:06:12.668128   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:12.668142   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:12.668260   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:13.159876   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qrv2r
	I0914 19:06:13.159904   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:13.159912   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:13.159918   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:13.163892   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:13.163917   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:13.163928   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:13.163937   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:13 GMT
	I0914 19:06:13.163944   29302 round_trippers.go:580]     Audit-Id: 2bafd162-6571-48ef-8c6f-4b72770d2047
	I0914 19:06:13.163952   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:13.163966   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:13.163976   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:13.165138   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"882","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I0914 19:06:13.165753   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:13.165771   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:13.165782   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:13.165791   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:13.168088   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:13.168105   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:13.168112   29302 round_trippers.go:580]     Audit-Id: 767659c2-2c07-4c69-b006-9d19ff6d9f6d
	I0914 19:06:13.168118   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:13.168123   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:13.168128   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:13.168135   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:13.168143   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:13 GMT
	I0914 19:06:13.168401   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:13.168681   29302 pod_ready.go:92] pod "coredns-5dd5756b68-qrv2r" in "kube-system" namespace has status "Ready":"True"
	I0914 19:06:13.168695   29302 pod_ready.go:81] duration metric: took 3.01941396s waiting for pod "coredns-5dd5756b68-qrv2r" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:13.168703   29302 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-040952" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:13.168801   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-040952
	I0914 19:06:13.168814   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:13.168832   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:13.168846   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:13.171347   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:13.171368   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:13.171375   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:13.171380   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:13.171388   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:13.171397   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:13 GMT
	I0914 19:06:13.171404   29302 round_trippers.go:580]     Audit-Id: b18d0768-dc31-460c-beed-e50e3a19d6cf
	I0914 19:06:13.171411   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:13.172044   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-040952","namespace":"kube-system","uid":"69002c12-b452-4986-a79f-1d67702a52ef","resourceVersion":"785","creationTimestamp":"2023-09-14T19:01:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.14:2379","kubernetes.io/config.hash":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.mirror":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.seen":"2023-09-14T19:01:40.726714562Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6295 chars]
	I0914 19:06:13.172379   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:13.172391   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:13.172399   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:13.172405   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:13.175143   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:13.175157   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:13.175163   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:13 GMT
	I0914 19:06:13.175168   29302 round_trippers.go:580]     Audit-Id: f6242de5-c366-4c79-aa4f-5b2c5ce0d01e
	I0914 19:06:13.175174   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:13.175182   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:13.175190   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:13.175200   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:13.176009   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:13.176284   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-040952
	I0914 19:06:13.176295   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:13.176301   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:13.176307   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:13.178355   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:13.178376   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:13.178382   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:13.178387   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:13 GMT
	I0914 19:06:13.178393   29302 round_trippers.go:580]     Audit-Id: 8172c157-f43e-42e0-b3a6-8cbd28c89432
	I0914 19:06:13.178401   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:13.178409   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:13.178417   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:13.178832   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-040952","namespace":"kube-system","uid":"69002c12-b452-4986-a79f-1d67702a52ef","resourceVersion":"785","creationTimestamp":"2023-09-14T19:01:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.14:2379","kubernetes.io/config.hash":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.mirror":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.seen":"2023-09-14T19:01:40.726714562Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6295 chars]
	I0914 19:06:13.179275   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:13.179292   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:13.179302   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:13.179309   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:13.180983   29302 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 19:06:13.180994   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:13.180999   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:13.181004   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:13 GMT
	I0914 19:06:13.181009   29302 round_trippers.go:580]     Audit-Id: 7d797daa-6bd3-4f35-8046-01886aa5fa4e
	I0914 19:06:13.181014   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:13.181019   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:13.181024   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:13.181219   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:13.682300   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-040952
	I0914 19:06:13.682333   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:13.682342   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:13.682347   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:13.685143   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:13.685160   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:13.685166   29302 round_trippers.go:580]     Audit-Id: 0910f73d-781a-443b-b8e1-0d453e50ba92
	I0914 19:06:13.685172   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:13.685177   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:13.685182   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:13.685187   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:13.685192   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:13 GMT
	I0914 19:06:13.685503   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-040952","namespace":"kube-system","uid":"69002c12-b452-4986-a79f-1d67702a52ef","resourceVersion":"785","creationTimestamp":"2023-09-14T19:01:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.14:2379","kubernetes.io/config.hash":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.mirror":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.seen":"2023-09-14T19:01:40.726714562Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6295 chars]
	I0914 19:06:13.685920   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:13.685934   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:13.685941   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:13.685947   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:13.688227   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:13.688240   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:13.688246   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:13.688252   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:13.688260   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:13.688268   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:13.688281   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:13 GMT
	I0914 19:06:13.688288   29302 round_trippers.go:580]     Audit-Id: 078b7d2a-29bc-4729-9a02-7236c4049ad7
	I0914 19:06:13.688474   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:14.182102   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-040952
	I0914 19:06:14.182125   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:14.182133   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:14.182140   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:14.187517   29302 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0914 19:06:14.187544   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:14.187554   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:14.187562   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:14.187569   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:14.187577   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:14.187586   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:14 GMT
	I0914 19:06:14.187594   29302 round_trippers.go:580]     Audit-Id: dd780464-2280-4b93-b398-b175b603d0fe
	I0914 19:06:14.188035   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-040952","namespace":"kube-system","uid":"69002c12-b452-4986-a79f-1d67702a52ef","resourceVersion":"785","creationTimestamp":"2023-09-14T19:01:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.14:2379","kubernetes.io/config.hash":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.mirror":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.seen":"2023-09-14T19:01:40.726714562Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6295 chars]
	I0914 19:06:14.188554   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:14.188572   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:14.188583   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:14.188592   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:14.190606   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:14.190620   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:14.190626   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:14.190632   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:14 GMT
	I0914 19:06:14.190637   29302 round_trippers.go:580]     Audit-Id: 104efd51-1025-4755-af8b-f207cfcdb912
	I0914 19:06:14.190642   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:14.190647   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:14.190652   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:14.190979   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:14.682687   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-040952
	I0914 19:06:14.682711   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:14.682719   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:14.682725   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:14.690728   29302 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0914 19:06:14.690764   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:14.690775   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:14.690783   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:14.690791   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:14 GMT
	I0914 19:06:14.690799   29302 round_trippers.go:580]     Audit-Id: 4dc518a5-6cbd-4561-8ed6-e72b82b2abda
	I0914 19:06:14.690806   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:14.690814   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:14.690995   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-040952","namespace":"kube-system","uid":"69002c12-b452-4986-a79f-1d67702a52ef","resourceVersion":"887","creationTimestamp":"2023-09-14T19:01:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.14:2379","kubernetes.io/config.hash":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.mirror":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.seen":"2023-09-14T19:01:40.726714562Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6071 chars]
	I0914 19:06:14.691406   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:14.691420   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:14.691427   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:14.691433   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:14.697743   29302 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0914 19:06:14.697765   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:14.697774   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:14.697779   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:14.697784   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:14.697789   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:14 GMT
	I0914 19:06:14.697794   29302 round_trippers.go:580]     Audit-Id: 07d3511e-72f3-415a-b985-0c38f9c2dc48
	I0914 19:06:14.697799   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:14.698080   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:14.698416   29302 pod_ready.go:92] pod "etcd-multinode-040952" in "kube-system" namespace has status "Ready":"True"
	I0914 19:06:14.698432   29302 pod_ready.go:81] duration metric: took 1.529723471s waiting for pod "etcd-multinode-040952" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:14.698448   29302 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-040952" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:14.698508   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-040952
	I0914 19:06:14.698517   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:14.698524   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:14.698530   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:14.703391   29302 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 19:06:14.703406   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:14.703412   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:14.703418   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:14.703423   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:14.703428   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:14.703433   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:14 GMT
	I0914 19:06:14.703439   29302 round_trippers.go:580]     Audit-Id: 0b9ff4df-c192-426d-837d-19a8ddc6d994
	I0914 19:06:14.703718   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-040952","namespace":"kube-system","uid":"10fd42d2-c2af-48e4-8724-c8ffe95daa20","resourceVersion":"871","creationTimestamp":"2023-09-14T19:01:41Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.14:8443","kubernetes.io/config.hash":"8756931ebb3ad632d1fa90a79d546b12","kubernetes.io/config.mirror":"8756931ebb3ad632d1fa90a79d546b12","kubernetes.io/config.seen":"2023-09-14T19:01:40.726715710Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7606 chars]
	I0914 19:06:14.704127   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:14.704140   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:14.704147   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:14.704153   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:14.706425   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:14.706444   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:14.706451   29302 round_trippers.go:580]     Audit-Id: 6eee19bb-2b91-4350-b2ae-7edfbd41930d
	I0914 19:06:14.706457   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:14.706462   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:14.706467   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:14.706472   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:14.706478   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:14 GMT
	I0914 19:06:14.706615   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:14.706908   29302 pod_ready.go:92] pod "kube-apiserver-multinode-040952" in "kube-system" namespace has status "Ready":"True"
	I0914 19:06:14.706921   29302 pod_ready.go:81] duration metric: took 8.465952ms waiting for pod "kube-apiserver-multinode-040952" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:14.706930   29302 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-040952" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:14.706986   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-040952
	I0914 19:06:14.706996   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:14.707007   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:14.707017   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:14.710085   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:14.710105   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:14.710115   29302 round_trippers.go:580]     Audit-Id: 37a4af49-de22-42c5-8342-96bdccfba829
	I0914 19:06:14.710126   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:14.710135   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:14.710143   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:14.710152   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:14.710160   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:14 GMT
	I0914 19:06:14.710726   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-040952","namespace":"kube-system","uid":"a3657cb3-c202-4067-83e1-e015b97f23c7","resourceVersion":"884","creationTimestamp":"2023-09-14T19:01:41Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"eae1e4ee5d796cbce52373fd75c02fd6","kubernetes.io/config.mirror":"eae1e4ee5d796cbce52373fd75c02fd6","kubernetes.io/config.seen":"2023-09-14T19:01:40.726708753Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7174 chars]
	I0914 19:06:14.830503   29302 request.go:629] Waited for 119.282235ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:14.830554   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:14.830558   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:14.830566   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:14.830572   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:14.833064   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:14.833083   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:14.833090   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:14 GMT
	I0914 19:06:14.833095   29302 round_trippers.go:580]     Audit-Id: 7a8584d4-7b4d-4f0c-a673-2711303dfb2c
	I0914 19:06:14.833100   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:14.833106   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:14.833110   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:14.833116   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:14.833241   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:14.833562   29302 pod_ready.go:92] pod "kube-controller-manager-multinode-040952" in "kube-system" namespace has status "Ready":"True"
	I0914 19:06:14.833577   29302 pod_ready.go:81] duration metric: took 126.641384ms waiting for pod "kube-controller-manager-multinode-040952" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:14.833587   29302 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gldkh" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:15.030888   29302 request.go:629] Waited for 197.237265ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gldkh
	I0914 19:06:15.030946   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gldkh
	I0914 19:06:15.030951   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:15.030960   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:15.030966   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:15.034339   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:15.034359   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:15.034366   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:15.034374   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:15.034386   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:15.034394   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:15.034408   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:14 GMT
	I0914 19:06:15.034416   29302 round_trippers.go:580]     Audit-Id: 3c39cfc6-1f06-4726-9679-50e437a9b84d
	I0914 19:06:15.034690   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gldkh","generateName":"kube-proxy-","namespace":"kube-system","uid":"55ba7c02-d066-4399-a622-621499fbc662","resourceVersion":"541","creationTimestamp":"2023-09-14T19:02:56Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b81636f3-a9be-4290-be24-324c7fac8ce6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:02:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b81636f3-a9be-4290-be24-324c7fac8ce6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5541 chars]
	I0914 19:06:15.230480   29302 request.go:629] Waited for 195.333524ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/multinode-040952-m02
	I0914 19:06:15.230552   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952-m02
	I0914 19:06:15.230557   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:15.230565   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:15.230574   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:15.234304   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:15.234329   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:15.234339   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:15.234347   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:15.234359   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:15.234366   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:15 GMT
	I0914 19:06:15.234377   29302 round_trippers.go:580]     Audit-Id: 4a324e73-8fa1-482f-bde6-ae80be99f721
	I0914 19:06:15.234386   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:15.234528   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952-m02","uid":"26bddb4d-d211-4e3d-a188-317e100d2aa5","resourceVersion":"608","creationTimestamp":"2023-09-14T19:02:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:02:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3266 chars]
	I0914 19:06:15.234774   29302 pod_ready.go:92] pod "kube-proxy-gldkh" in "kube-system" namespace has status "Ready":"True"
	I0914 19:06:15.234787   29302 pod_ready.go:81] duration metric: took 401.195035ms waiting for pod "kube-proxy-gldkh" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:15.234796   29302 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gpl2p" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:15.430003   29302 request.go:629] Waited for 195.152769ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gpl2p
	I0914 19:06:15.430096   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gpl2p
	I0914 19:06:15.430104   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:15.430118   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:15.430142   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:15.433237   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:15.433271   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:15.433281   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:15.433290   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:15.433300   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:15 GMT
	I0914 19:06:15.433309   29302 round_trippers.go:580]     Audit-Id: 92d372f9-e9c9-4d13-8b75-1b3ebd7f2435
	I0914 19:06:15.433321   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:15.433329   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:15.433627   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gpl2p","generateName":"kube-proxy-","namespace":"kube-system","uid":"4e6ab5b8-53fa-4e56-b534-e130dc2b3c0f","resourceVersion":"761","creationTimestamp":"2023-09-14T19:03:50Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b81636f3-a9be-4290-be24-324c7fac8ce6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:03:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b81636f3-a9be-4290-be24-324c7fac8ce6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5750 chars]
	I0914 19:06:15.630434   29302 request.go:629] Waited for 196.369841ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/multinode-040952-m03
	I0914 19:06:15.630534   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952-m03
	I0914 19:06:15.630546   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:15.630557   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:15.630568   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:15.633799   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:15.633824   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:15.633834   29302 round_trippers.go:580]     Audit-Id: 8ea32575-14e9-412a-ba38-fd00269447f5
	I0914 19:06:15.633844   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:15.633852   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:15.633864   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:15.633873   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:15.633887   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:15 GMT
	I0914 19:06:15.634144   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952-m03","uid":"28b45907-e363-4b10-afa7-ecf3cea247b8","resourceVersion":"891","creationTimestamp":"2023-09-14T19:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3084 chars]
	I0914 19:06:15.634401   29302 pod_ready.go:92] pod "kube-proxy-gpl2p" in "kube-system" namespace has status "Ready":"True"
	I0914 19:06:15.634416   29302 pod_ready.go:81] duration metric: took 399.614214ms waiting for pod "kube-proxy-gpl2p" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:15.634430   29302 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hbsmt" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:15.830846   29302 request.go:629] Waited for 196.353294ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hbsmt
	I0914 19:06:15.830928   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hbsmt
	I0914 19:06:15.830933   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:15.830945   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:15.830952   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:15.834221   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:15.834246   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:15.834259   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:15.834267   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:15.834274   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:15 GMT
	I0914 19:06:15.834282   29302 round_trippers.go:580]     Audit-Id: 44182567-ce38-4fce-a842-f78410d89ee9
	I0914 19:06:15.834289   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:15.834298   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:15.834802   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hbsmt","generateName":"kube-proxy-","namespace":"kube-system","uid":"d68fe199-9969-47a9-95a1-04e766c5dbaa","resourceVersion":"798","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b81636f3-a9be-4290-be24-324c7fac8ce6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b81636f3-a9be-4290-be24-324c7fac8ce6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5738 chars]
	I0914 19:06:16.030675   29302 request.go:629] Waited for 195.45562ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:16.030731   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:16.030736   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:16.030743   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:16.030750   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:16.034236   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:16.034260   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:16.034267   29302 round_trippers.go:580]     Audit-Id: e468604d-7ce9-469a-b812-ed3c9c650d6e
	I0914 19:06:16.034275   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:16.034281   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:16.034286   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:16.034291   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:16.034297   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:16 GMT
	I0914 19:06:16.034614   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:16.034941   29302 pod_ready.go:92] pod "kube-proxy-hbsmt" in "kube-system" namespace has status "Ready":"True"
	I0914 19:06:16.034956   29302 pod_ready.go:81] duration metric: took 400.519289ms waiting for pod "kube-proxy-hbsmt" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:16.034964   29302 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-040952" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:16.230342   29302 request.go:629] Waited for 195.324407ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-040952
	I0914 19:06:16.230449   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-040952
	I0914 19:06:16.230454   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:16.230462   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:16.230470   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:16.233547   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:16.233564   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:16.233572   29302 round_trippers.go:580]     Audit-Id: 224fde99-6866-4d6c-81fe-2f97bc0c6734
	I0914 19:06:16.233577   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:16.233587   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:16.233592   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:16.233597   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:16.233602   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:16 GMT
	I0914 19:06:16.233823   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-040952","namespace":"kube-system","uid":"386eb63c-5554-4ab9-8241-b096f390ee9c","resourceVersion":"784","creationTimestamp":"2023-09-14T19:01:38Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f83b231eda73d0afcf9cdab17117c7e6","kubernetes.io/config.mirror":"f83b231eda73d0afcf9cdab17117c7e6","kubernetes.io/config.seen":"2023-09-14T19:01:32.411176140Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5148 chars]
	I0914 19:06:16.430509   29302 request.go:629] Waited for 196.339279ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:16.430573   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:16.430580   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:16.430590   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:16.430600   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:16.433517   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:16.433535   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:16.433542   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:16.433559   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:16.433565   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:16 GMT
	I0914 19:06:16.433571   29302 round_trippers.go:580]     Audit-Id: 1da1d693-84a7-4480-b07f-7a386588f044
	I0914 19:06:16.433576   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:16.433581   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:16.433983   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:16.630679   29302 request.go:629] Waited for 196.348452ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-040952
	I0914 19:06:16.630764   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-040952
	I0914 19:06:16.630769   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:16.630776   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:16.630783   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:16.633557   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:16.633575   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:16.633582   29302 round_trippers.go:580]     Audit-Id: 2136e32a-148d-4e1d-825d-95e56e17f7f3
	I0914 19:06:16.633589   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:16.633597   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:16.633605   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:16.633612   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:16.633629   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:16 GMT
	I0914 19:06:16.634402   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-040952","namespace":"kube-system","uid":"386eb63c-5554-4ab9-8241-b096f390ee9c","resourceVersion":"784","creationTimestamp":"2023-09-14T19:01:38Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f83b231eda73d0afcf9cdab17117c7e6","kubernetes.io/config.mirror":"f83b231eda73d0afcf9cdab17117c7e6","kubernetes.io/config.seen":"2023-09-14T19:01:32.411176140Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5148 chars]
	I0914 19:06:16.830072   29302 request.go:629] Waited for 195.313935ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:16.830145   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:16.830152   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:16.830160   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:16.830168   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:16.832962   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:16.832981   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:16.832988   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:16.832993   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:16.832998   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:16.833006   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:16.833011   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:16 GMT
	I0914 19:06:16.833016   29302 round_trippers.go:580]     Audit-Id: 685468aa-007f-4cd0-908f-286f4b9b8738
	I0914 19:06:16.833566   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:17.334599   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-040952
	I0914 19:06:17.334622   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:17.334645   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:17.334652   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:17.337790   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:17.337810   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:17.337817   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:17.337823   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:17.337828   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:17 GMT
	I0914 19:06:17.337835   29302 round_trippers.go:580]     Audit-Id: 13885e51-e7a2-41bd-a4e6-27c1810b7f5b
	I0914 19:06:17.337843   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:17.337850   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:17.338071   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-040952","namespace":"kube-system","uid":"386eb63c-5554-4ab9-8241-b096f390ee9c","resourceVersion":"784","creationTimestamp":"2023-09-14T19:01:38Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f83b231eda73d0afcf9cdab17117c7e6","kubernetes.io/config.mirror":"f83b231eda73d0afcf9cdab17117c7e6","kubernetes.io/config.seen":"2023-09-14T19:01:32.411176140Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5148 chars]
	I0914 19:06:17.338439   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:17.338455   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:17.338465   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:17.338474   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:17.340824   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:17.340837   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:17.340843   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:17 GMT
	I0914 19:06:17.340848   29302 round_trippers.go:580]     Audit-Id: e2df7950-3f43-43ac-a2ff-9ebcb6aba048
	I0914 19:06:17.340854   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:17.340862   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:17.340871   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:17.340883   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:17.341277   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:17.834981   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-040952
	I0914 19:06:17.835006   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:17.835015   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:17.835021   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:17.837948   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:17.837973   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:17.837984   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:17 GMT
	I0914 19:06:17.837992   29302 round_trippers.go:580]     Audit-Id: bf96bd3c-445d-4267-b684-9a852b7ce0ca
	I0914 19:06:17.838000   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:17.838008   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:17.838020   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:17.838027   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:17.838816   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-040952","namespace":"kube-system","uid":"386eb63c-5554-4ab9-8241-b096f390ee9c","resourceVersion":"784","creationTimestamp":"2023-09-14T19:01:38Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f83b231eda73d0afcf9cdab17117c7e6","kubernetes.io/config.mirror":"f83b231eda73d0afcf9cdab17117c7e6","kubernetes.io/config.seen":"2023-09-14T19:01:32.411176140Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5148 chars]
	I0914 19:06:17.839223   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:17.839236   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:17.839244   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:17.839250   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:17.842020   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:17.842042   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:17.842052   29302 round_trippers.go:580]     Audit-Id: 58f6c61f-2107-4d49-bc25-beaf577ebc0b
	I0914 19:06:17.842063   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:17.842073   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:17.842084   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:17.842094   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:17.842104   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:17 GMT
	I0914 19:06:17.842191   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:18.334912   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-040952
	I0914 19:06:18.334936   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:18.334944   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:18.334950   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:18.337727   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:18.337753   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:18.337763   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:18.337772   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:18.337784   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:18 GMT
	I0914 19:06:18.337793   29302 round_trippers.go:580]     Audit-Id: 91452a7a-9433-48f7-bb48-08448530a97b
	I0914 19:06:18.337804   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:18.337811   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:18.338243   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-040952","namespace":"kube-system","uid":"386eb63c-5554-4ab9-8241-b096f390ee9c","resourceVersion":"894","creationTimestamp":"2023-09-14T19:01:38Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f83b231eda73d0afcf9cdab17117c7e6","kubernetes.io/config.mirror":"f83b231eda73d0afcf9cdab17117c7e6","kubernetes.io/config.seen":"2023-09-14T19:01:32.411176140Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4904 chars]
	I0914 19:06:18.338636   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:18.338654   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:18.338664   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:18.338674   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:18.342026   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:18.342059   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:18.342068   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:18.342078   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:18.342085   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:18.342096   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:18 GMT
	I0914 19:06:18.342104   29302 round_trippers.go:580]     Audit-Id: a5dad678-33fe-4c2f-a5f5-c10a6380266e
	I0914 19:06:18.342118   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:18.342444   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:18.342720   29302 pod_ready.go:92] pod "kube-scheduler-multinode-040952" in "kube-system" namespace has status "Ready":"True"
	I0914 19:06:18.342732   29302 pod_ready.go:81] duration metric: took 2.30776305s waiting for pod "kube-scheduler-multinode-040952" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:18.342741   29302 pod_ready.go:38] duration metric: took 8.201906021s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
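The polling above repeatedly GETs the kube-scheduler pod and its node until the pod reports Ready, and the summary line lists the other labels waited on the same way. As a rough manual equivalent (illustrative only; it assumes kubectl is already pointed at this cluster's kubeconfig), the same wait can be expressed directly with kubectl:

    # Wait for the control-plane pods the restart code polls above to report Ready.
    kubectl -n kube-system wait pod -l component=kube-scheduler --for=condition=Ready --timeout=5m
    kubectl -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=5m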
	I0914 19:06:18.342758   29302 api_server.go:52] waiting for apiserver process to appear ...
	I0914 19:06:18.342802   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 19:06:18.356335   29302 command_runner.go:130] > 1693
	I0914 19:06:18.356824   29302 api_server.go:72] duration metric: took 11.093271286s to wait for apiserver process to appear ...
	I0914 19:06:18.356842   29302 api_server.go:88] waiting for apiserver healthz status ...
	I0914 19:06:18.356862   29302 api_server.go:253] Checking apiserver healthz at https://192.168.39.14:8443/healthz ...
	I0914 19:06:18.362653   29302 api_server.go:279] https://192.168.39.14:8443/healthz returned 200:
	ok
	I0914 19:06:18.362710   29302 round_trippers.go:463] GET https://192.168.39.14:8443/version
	I0914 19:06:18.362717   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:18.362725   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:18.362731   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:18.363650   29302 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0914 19:06:18.363667   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:18.363677   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:18.363686   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:18.363694   29302 round_trippers.go:580]     Content-Length: 263
	I0914 19:06:18.363711   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:18 GMT
	I0914 19:06:18.363719   29302 round_trippers.go:580]     Audit-Id: 01d336c4-24b2-4b6e-a634-c932a4f80f56
	I0914 19:06:18.363728   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:18.363733   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:18.363748   29302 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.1",
	  "gitCommit": "8dc49c4b984b897d423aab4971090e1879eb4f23",
	  "gitTreeState": "clean",
	  "buildDate": "2023-08-24T11:16:30Z",
	  "goVersion": "go1.20.7",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0914 19:06:18.363790   29302 api_server.go:141] control plane version: v1.28.1
	I0914 19:06:18.363805   29302 api_server.go:131] duration metric: took 6.957442ms to wait for apiserver health ...
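The healthz probe and /version request above can be reproduced by hand. A minimal sketch, again assuming kubectl is configured for the same cluster:

    # Same endpoints the restart code has just queried.
    kubectl get --raw /healthz     # expect the literal body "ok"
    kubectl version -o json        # serverVersion.gitVersion should report v1.28.1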
	I0914 19:06:18.363814   29302 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 19:06:18.363875   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
	I0914 19:06:18.363883   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:18.363889   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:18.363900   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:18.367955   29302 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 19:06:18.367989   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:18.367997   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:18.368005   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:18.368013   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:18 GMT
	I0914 19:06:18.368025   29302 round_trippers.go:580]     Audit-Id: 4a4def47-e1cc-4f97-a173-69327418d154
	I0914 19:06:18.368035   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:18.368044   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:18.369884   29302 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"894"},"items":[{"metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"882","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82928 chars]
	I0914 19:06:18.373265   29302 system_pods.go:59] 12 kube-system pods found
	I0914 19:06:18.373287   29302 system_pods.go:61] "coredns-5dd5756b68-qrv2r" [f9293d00-1000-4ffa-b978-d08c00eee7e7] Running
	I0914 19:06:18.373292   29302 system_pods.go:61] "etcd-multinode-040952" [69002c12-b452-4986-a79f-1d67702a52ef] Running
	I0914 19:06:18.373296   29302 system_pods.go:61] "kindnet-hvz8s" [38b5564e-8c11-45e8-9751-bcaa4258a342] Running
	I0914 19:06:18.373299   29302 system_pods.go:61] "kindnet-lrkhw" [9861f216-97e0-4761-9531-cb34e8855913] Running
	I0914 19:06:18.373303   29302 system_pods.go:61] "kindnet-pjfsc" [7716e479-4492-439b-9bdf-077a541dc949] Running
	I0914 19:06:18.373307   29302 system_pods.go:61] "kube-apiserver-multinode-040952" [10fd42d2-c2af-48e4-8724-c8ffe95daa20] Running
	I0914 19:06:18.373312   29302 system_pods.go:61] "kube-controller-manager-multinode-040952" [a3657cb3-c202-4067-83e1-e015b97f23c7] Running
	I0914 19:06:18.373315   29302 system_pods.go:61] "kube-proxy-gldkh" [55ba7c02-d066-4399-a622-621499fbc662] Running
	I0914 19:06:18.373326   29302 system_pods.go:61] "kube-proxy-gpl2p" [4e6ab5b8-53fa-4e56-b534-e130dc2b3c0f] Running
	I0914 19:06:18.373335   29302 system_pods.go:61] "kube-proxy-hbsmt" [d68fe199-9969-47a9-95a1-04e766c5dbaa] Running
	I0914 19:06:18.373339   29302 system_pods.go:61] "kube-scheduler-multinode-040952" [386eb63c-5554-4ab9-8241-b096f390ee9c] Running
	I0914 19:06:18.373342   29302 system_pods.go:61] "storage-provisioner" [8f25fe5b-237f-415a-baca-e4342106bb4d] Running
	I0914 19:06:18.373347   29302 system_pods.go:74] duration metric: took 9.528517ms to wait for pod list to return data ...
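The inventory above (12 kube-system pods, all Running) comes from a plain pod listing. A quick manual spot check, under the same assumption about kubectl's context:

    # Show the same kube-system pods and surface any that are not in the Running phase.
    kubectl get pods -n kube-system -o wide
    kubectl get pods -n kube-system --field-selector=status.phase!=Running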
	I0914 19:06:18.373355   29302 default_sa.go:34] waiting for default service account to be created ...
	I0914 19:06:18.430623   29302 request.go:629] Waited for 57.191118ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/default/serviceaccounts
	I0914 19:06:18.430678   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/default/serviceaccounts
	I0914 19:06:18.430682   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:18.430689   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:18.430695   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:18.433750   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:18.433768   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:18.433775   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:18.433780   29302 round_trippers.go:580]     Content-Length: 261
	I0914 19:06:18.433785   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:18 GMT
	I0914 19:06:18.433790   29302 round_trippers.go:580]     Audit-Id: f58f454f-de35-4fde-b782-3e31600d0a05
	I0914 19:06:18.433795   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:18.433803   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:18.433808   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:18.433825   29302 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"894"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"751abfd7-43aa-4bf5-a223-71659884f01c","resourceVersion":"335","creationTimestamp":"2023-09-14T19:01:53Z"}}]}
	I0914 19:06:18.433967   29302 default_sa.go:45] found service account: "default"
	I0914 19:06:18.433981   29302 default_sa.go:55] duration metric: took 60.621039ms for default service account to be created ...
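The default service-account wait reduces to a single lookup, roughly:

    # The restart blocks until this object exists in the default namespace.
    kubectl get serviceaccount default -n default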
	I0914 19:06:18.433987   29302 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 19:06:18.630408   29302 request.go:629] Waited for 196.359387ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
	I0914 19:06:18.630467   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
	I0914 19:06:18.630472   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:18.630480   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:18.630486   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:18.635088   29302 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 19:06:18.635116   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:18.635126   29302 round_trippers.go:580]     Audit-Id: 40dbf5e6-bdfd-4c25-924c-528834eef0a7
	I0914 19:06:18.635135   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:18.635142   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:18.635150   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:18.635159   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:18.635173   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:18 GMT
	I0914 19:06:18.636346   29302 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"894"},"items":[{"metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"882","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82928 chars]
	I0914 19:06:18.639989   29302 system_pods.go:86] 12 kube-system pods found
	I0914 19:06:18.640017   29302 system_pods.go:89] "coredns-5dd5756b68-qrv2r" [f9293d00-1000-4ffa-b978-d08c00eee7e7] Running
	I0914 19:06:18.640024   29302 system_pods.go:89] "etcd-multinode-040952" [69002c12-b452-4986-a79f-1d67702a52ef] Running
	I0914 19:06:18.640031   29302 system_pods.go:89] "kindnet-hvz8s" [38b5564e-8c11-45e8-9751-bcaa4258a342] Running
	I0914 19:06:18.640037   29302 system_pods.go:89] "kindnet-lrkhw" [9861f216-97e0-4761-9531-cb34e8855913] Running
	I0914 19:06:18.640043   29302 system_pods.go:89] "kindnet-pjfsc" [7716e479-4492-439b-9bdf-077a541dc949] Running
	I0914 19:06:18.640050   29302 system_pods.go:89] "kube-apiserver-multinode-040952" [10fd42d2-c2af-48e4-8724-c8ffe95daa20] Running
	I0914 19:06:18.640058   29302 system_pods.go:89] "kube-controller-manager-multinode-040952" [a3657cb3-c202-4067-83e1-e015b97f23c7] Running
	I0914 19:06:18.640064   29302 system_pods.go:89] "kube-proxy-gldkh" [55ba7c02-d066-4399-a622-621499fbc662] Running
	I0914 19:06:18.640071   29302 system_pods.go:89] "kube-proxy-gpl2p" [4e6ab5b8-53fa-4e56-b534-e130dc2b3c0f] Running
	I0914 19:06:18.640080   29302 system_pods.go:89] "kube-proxy-hbsmt" [d68fe199-9969-47a9-95a1-04e766c5dbaa] Running
	I0914 19:06:18.640088   29302 system_pods.go:89] "kube-scheduler-multinode-040952" [386eb63c-5554-4ab9-8241-b096f390ee9c] Running
	I0914 19:06:18.640095   29302 system_pods.go:89] "storage-provisioner" [8f25fe5b-237f-415a-baca-e4342106bb4d] Running
	I0914 19:06:18.640110   29302 system_pods.go:126] duration metric: took 206.118337ms to wait for k8s-apps to be running ...
	I0914 19:06:18.640118   29302 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 19:06:18.640169   29302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 19:06:18.654395   29302 system_svc.go:56] duration metric: took 14.272365ms WaitForService to wait for kubelet.
	I0914 19:06:18.654416   29302 kubeadm.go:581] duration metric: took 11.390867757s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
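The kubelet check above runs systemctl inside the control-plane VM over SSH. A comparable manual check through the stock minikube CLI (illustrative; only the profile name is taken from this run) would look roughly like:

    # Ask systemd inside the VM whether kubelet is active; prints "active" on success.
    minikube -p multinode-040952 ssh "sudo systemctl is-active kubelet"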
	I0914 19:06:18.654443   29302 node_conditions.go:102] verifying NodePressure condition ...
	I0914 19:06:18.830833   29302 request.go:629] Waited for 176.33044ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes
	I0914 19:06:18.830908   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes
	I0914 19:06:18.830915   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:18.830925   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:18.830934   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:18.833992   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:18.834011   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:18.834020   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:18.834029   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:18 GMT
	I0914 19:06:18.834038   29302 round_trippers.go:580]     Audit-Id: 78eec727-aee2-400e-8c95-4146a9496a91
	I0914 19:06:18.834047   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:18.834056   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:18.834064   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:18.834284   29302 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"894"},"items":[{"metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 13543 chars]
	I0914 19:06:18.835016   29302 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 19:06:18.835038   29302 node_conditions.go:123] node cpu capacity is 2
	I0914 19:06:18.835048   29302 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 19:06:18.835052   29302 node_conditions.go:123] node cpu capacity is 2
	I0914 19:06:18.835058   29302 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 19:06:18.835067   29302 node_conditions.go:123] node cpu capacity is 2
	I0914 19:06:18.835073   29302 node_conditions.go:105] duration metric: took 180.624501ms to run NodePressure ...
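The three capacity pairs above are read from the NodeList returned for the cluster's nodes. The same figures can be pulled back with a jsonpath query (a sketch; how kubectl renders the capacity map varies by version):

    # Print each node's name followed by its full capacity map (cpu, ephemeral-storage, memory, pods).
    kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.capacity}{"\n"}{end}'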
	I0914 19:06:18.835093   29302 start.go:228] waiting for startup goroutines ...
	I0914 19:06:18.835102   29302 start.go:233] waiting for cluster config update ...
	I0914 19:06:18.835115   29302 start.go:242] writing updated cluster config ...
	I0914 19:06:18.835683   29302 config.go:182] Loaded profile config "multinode-040952": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 19:06:18.835796   29302 profile.go:148] Saving config to /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/config.json ...
	I0914 19:06:18.838910   29302 out.go:177] * Starting worker node multinode-040952-m02 in cluster multinode-040952
	I0914 19:06:18.840147   29302 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0914 19:06:18.840163   29302 cache.go:57] Caching tarball of preloaded images
	I0914 19:06:18.840249   29302 preload.go:174] Found /home/jenkins/minikube-integration/17217-7285/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0914 19:06:18.840261   29302 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0914 19:06:18.840334   29302 profile.go:148] Saving config to /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/config.json ...
	I0914 19:06:18.840476   29302 start.go:365] acquiring machines lock for multinode-040952-m02: {Name:mk07a05e24a79016fc0a298412b40eb87df032d8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 19:06:18.840512   29302 start.go:369] acquired machines lock for "multinode-040952-m02" in 19.707µs
	I0914 19:06:18.840566   29302 start.go:96] Skipping create...Using existing machine configuration
	I0914 19:06:18.840575   29302 fix.go:54] fixHost starting: m02
	I0914 19:06:18.840830   29302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0914 19:06:18.840857   29302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 19:06:18.855469   29302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46067
	I0914 19:06:18.855890   29302 main.go:141] libmachine: () Calling .GetVersion
	I0914 19:06:18.856329   29302 main.go:141] libmachine: Using API Version  1
	I0914 19:06:18.856352   29302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 19:06:18.856677   29302 main.go:141] libmachine: () Calling .GetMachineName
	I0914 19:06:18.856891   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .DriverName
	I0914 19:06:18.857065   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetState
	I0914 19:06:18.858712   29302 fix.go:102] recreateIfNeeded on multinode-040952-m02: state=Stopped err=<nil>
	I0914 19:06:18.858735   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .DriverName
	W0914 19:06:18.858914   29302 fix.go:128] unexpected machine state, will restart: <nil>
	I0914 19:06:18.861118   29302 out.go:177] * Restarting existing kvm2 VM for "multinode-040952-m02" ...
	I0914 19:06:18.862649   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .Start
	I0914 19:06:18.862832   29302 main.go:141] libmachine: (multinode-040952-m02) Ensuring networks are active...
	I0914 19:06:18.863554   29302 main.go:141] libmachine: (multinode-040952-m02) Ensuring network default is active
	I0914 19:06:18.863887   29302 main.go:141] libmachine: (multinode-040952-m02) Ensuring network mk-multinode-040952 is active
	I0914 19:06:18.864247   29302 main.go:141] libmachine: (multinode-040952-m02) Getting domain xml...
	I0914 19:06:18.864791   29302 main.go:141] libmachine: (multinode-040952-m02) Creating domain...
	I0914 19:06:20.114677   29302 main.go:141] libmachine: (multinode-040952-m02) Waiting to get IP...
	I0914 19:06:20.115697   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:20.116116   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | unable to find current IP address of domain multinode-040952-m02 in network mk-multinode-040952
	I0914 19:06:20.116177   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | I0914 19:06:20.116093   29537 retry.go:31] will retry after 292.793167ms: waiting for machine to come up
	I0914 19:06:20.410624   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:20.411041   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | unable to find current IP address of domain multinode-040952-m02 in network mk-multinode-040952
	I0914 19:06:20.411062   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | I0914 19:06:20.411011   29537 retry.go:31] will retry after 329.185161ms: waiting for machine to come up
	I0914 19:06:20.741486   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:20.741956   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | unable to find current IP address of domain multinode-040952-m02 in network mk-multinode-040952
	I0914 19:06:20.741984   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | I0914 19:06:20.741922   29537 retry.go:31] will retry after 372.179082ms: waiting for machine to come up
	I0914 19:06:21.115108   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:21.115492   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | unable to find current IP address of domain multinode-040952-m02 in network mk-multinode-040952
	I0914 19:06:21.115522   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | I0914 19:06:21.115446   29537 retry.go:31] will retry after 552.546331ms: waiting for machine to come up
	I0914 19:06:21.669165   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:21.669673   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | unable to find current IP address of domain multinode-040952-m02 in network mk-multinode-040952
	I0914 19:06:21.669702   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | I0914 19:06:21.669630   29537 retry.go:31] will retry after 641.98724ms: waiting for machine to come up
	I0914 19:06:22.313770   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:22.314305   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | unable to find current IP address of domain multinode-040952-m02 in network mk-multinode-040952
	I0914 19:06:22.314344   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | I0914 19:06:22.314258   29537 retry.go:31] will retry after 792.672163ms: waiting for machine to come up
	I0914 19:06:23.108201   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:23.108628   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | unable to find current IP address of domain multinode-040952-m02 in network mk-multinode-040952
	I0914 19:06:23.108656   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | I0914 19:06:23.108582   29537 retry.go:31] will retry after 820.609535ms: waiting for machine to come up
	I0914 19:06:23.930887   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:23.931350   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | unable to find current IP address of domain multinode-040952-m02 in network mk-multinode-040952
	I0914 19:06:23.931383   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | I0914 19:06:23.931293   29537 retry.go:31] will retry after 933.919914ms: waiting for machine to come up
	I0914 19:06:24.866306   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:24.866762   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | unable to find current IP address of domain multinode-040952-m02 in network mk-multinode-040952
	I0914 19:06:24.866796   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | I0914 19:06:24.866720   29537 retry.go:31] will retry after 1.175445783s: waiting for machine to come up
	I0914 19:06:26.044181   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:26.044639   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | unable to find current IP address of domain multinode-040952-m02 in network mk-multinode-040952
	I0914 19:06:26.044674   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | I0914 19:06:26.044595   29537 retry.go:31] will retry after 1.659114662s: waiting for machine to come up
	I0914 19:06:27.705347   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:27.705796   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | unable to find current IP address of domain multinode-040952-m02 in network mk-multinode-040952
	I0914 19:06:27.705832   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | I0914 19:06:27.705738   29537 retry.go:31] will retry after 2.838813162s: waiting for machine to come up
	I0914 19:06:30.546592   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:30.547049   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | unable to find current IP address of domain multinode-040952-m02 in network mk-multinode-040952
	I0914 19:06:30.547092   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | I0914 19:06:30.547042   29537 retry.go:31] will retry after 2.43743272s: waiting for machine to come up
	I0914 19:06:32.987818   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:32.988277   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | unable to find current IP address of domain multinode-040952-m02 in network mk-multinode-040952
	I0914 19:06:32.988300   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | I0914 19:06:32.988246   29537 retry.go:31] will retry after 4.479558003s: waiting for machine to come up
	I0914 19:06:37.471961   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:37.472352   29302 main.go:141] libmachine: (multinode-040952-m02) Found IP for machine: 192.168.39.16
	I0914 19:06:37.472379   29302 main.go:141] libmachine: (multinode-040952-m02) Reserving static IP address...
	I0914 19:06:37.472392   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has current primary IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:37.472813   29302 main.go:141] libmachine: (multinode-040952-m02) Reserved static IP address: 192.168.39.16
	I0914 19:06:37.472867   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "multinode-040952-m02", mac: "52:54:00:2e:0b:03", ip: "192.168.39.16"} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
	I0914 19:06:37.472882   29302 main.go:141] libmachine: (multinode-040952-m02) Waiting for SSH to be available...
	I0914 19:06:37.472912   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | skip adding static IP to network mk-multinode-040952 - found existing host DHCP lease matching {name: "multinode-040952-m02", mac: "52:54:00:2e:0b:03", ip: "192.168.39.16"}
	I0914 19:06:37.472930   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | Getting to WaitForSSH function...
	I0914 19:06:37.474853   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:37.475216   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
	I0914 19:06:37.475243   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:37.475331   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | Using SSH client type: external
	I0914 19:06:37.475371   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17217-7285/.minikube/machines/multinode-040952-m02/id_rsa (-rw-------)
	I0914 19:06:37.475423   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.16 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17217-7285/.minikube/machines/multinode-040952-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 19:06:37.475447   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | About to run SSH command:
	I0914 19:06:37.475460   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | exit 0
	I0914 19:06:37.565151   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | SSH cmd err, output: <nil>: 
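The block above restarts the m02 domain through libvirt, polls DHCP until 192.168.39.16 shows up for MAC 52:54:00:2e:0b:03, and then probes SSH with a bare "exit 0". Under the same kvm2 driver the wait can be followed by hand; a rough equivalent (run against the system libvirt connection, with the key path and address taken from the driver output above):

    # Watch the libvirt network for the lease the driver is waiting on, then probe SSH the same way.
    virsh net-dhcp-leases mk-multinode-040952
    virsh domifaddr multinode-040952-m02 --source lease
    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -i /home/jenkins/minikube-integration/17217-7285/.minikube/machines/multinode-040952-m02/id_rsa \
        docker@192.168.39.16 'exit 0' && echo "ssh reachable"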
	I0914 19:06:37.565511   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetConfigRaw
	I0914 19:06:37.566140   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetIP
	I0914 19:06:37.568703   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:37.569097   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
	I0914 19:06:37.569132   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:37.569351   29302 profile.go:148] Saving config to /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/config.json ...
	I0914 19:06:37.569551   29302 machine.go:88] provisioning docker machine ...
	I0914 19:06:37.569568   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .DriverName
	I0914 19:06:37.569768   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetMachineName
	I0914 19:06:37.569927   29302 buildroot.go:166] provisioning hostname "multinode-040952-m02"
	I0914 19:06:37.569954   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetMachineName
	I0914 19:06:37.570118   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHHostname
	I0914 19:06:37.572245   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:37.572611   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
	I0914 19:06:37.572640   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:37.572754   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHPort
	I0914 19:06:37.572896   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
	I0914 19:06:37.573067   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
	I0914 19:06:37.573182   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHUsername
	I0914 19:06:37.573336   29302 main.go:141] libmachine: Using SSH client type: native
	I0914 19:06:37.573757   29302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0914 19:06:37.573780   29302 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-040952-m02 && echo "multinode-040952-m02" | sudo tee /etc/hostname
	I0914 19:06:37.710270   29302 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-040952-m02
	
	I0914 19:06:37.710294   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHHostname
	I0914 19:06:37.712933   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:37.713287   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
	I0914 19:06:37.713322   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:37.713438   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHPort
	I0914 19:06:37.713649   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
	I0914 19:06:37.713830   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
	I0914 19:06:37.713965   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHUsername
	I0914 19:06:37.714153   29302 main.go:141] libmachine: Using SSH client type: native
	I0914 19:06:37.714540   29302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0914 19:06:37.714569   29302 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-040952-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-040952-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-040952-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 19:06:37.850271   29302 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 19:06:37.850302   29302 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17217-7285/.minikube CaCertPath:/home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17217-7285/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17217-7285/.minikube}
	I0914 19:06:37.850321   29302 buildroot.go:174] setting up certificates
	I0914 19:06:37.850331   29302 provision.go:83] configureAuth start
	I0914 19:06:37.850343   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetMachineName
	I0914 19:06:37.850630   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetIP
	I0914 19:06:37.853071   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:37.853477   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
	I0914 19:06:37.853512   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:37.853665   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHHostname
	I0914 19:06:37.855889   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:37.856295   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
	I0914 19:06:37.856327   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:37.856394   29302 provision.go:138] copyHostCerts
	I0914 19:06:37.856430   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17217-7285/.minikube/ca.pem
	I0914 19:06:37.856463   29302 exec_runner.go:144] found /home/jenkins/minikube-integration/17217-7285/.minikube/ca.pem, removing ...
	I0914 19:06:37.856473   29302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17217-7285/.minikube/ca.pem
	I0914 19:06:37.856544   29302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17217-7285/.minikube/ca.pem (1082 bytes)
	I0914 19:06:37.856653   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17217-7285/.minikube/cert.pem
	I0914 19:06:37.856672   29302 exec_runner.go:144] found /home/jenkins/minikube-integration/17217-7285/.minikube/cert.pem, removing ...
	I0914 19:06:37.856676   29302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17217-7285/.minikube/cert.pem
	I0914 19:06:37.856699   29302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17217-7285/.minikube/cert.pem (1123 bytes)
	I0914 19:06:37.856741   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17217-7285/.minikube/key.pem
	I0914 19:06:37.856756   29302 exec_runner.go:144] found /home/jenkins/minikube-integration/17217-7285/.minikube/key.pem, removing ...
	I0914 19:06:37.856762   29302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17217-7285/.minikube/key.pem
	I0914 19:06:37.856781   29302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17217-7285/.minikube/key.pem (1679 bytes)
	I0914 19:06:37.856823   29302 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17217-7285/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca-key.pem org=jenkins.multinode-040952-m02 san=[192.168.39.16 192.168.39.16 localhost 127.0.0.1 minikube multinode-040952-m02]
	I0914 19:06:37.904344   29302 provision.go:172] copyRemoteCerts
	I0914 19:06:37.904397   29302 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 19:06:37.904417   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHHostname
	I0914 19:06:37.906652   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:37.906972   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
	I0914 19:06:37.907008   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:37.907156   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHPort
	I0914 19:06:37.907312   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
	I0914 19:06:37.907470   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHUsername
	I0914 19:06:37.907613   29302 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/multinode-040952-m02/id_rsa Username:docker}
	I0914 19:06:38.000649   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0914 19:06:38.000741   29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 19:06:38.025953   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0914 19:06:38.026028   29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0914 19:06:38.048996   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0914 19:06:38.049067   29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 19:06:38.072478   29302 provision.go:86] duration metric: configureAuth took 222.133675ms
	I0914 19:06:38.072507   29302 buildroot.go:189] setting minikube options for container-runtime
	I0914 19:06:38.072712   29302 config.go:182] Loaded profile config "multinode-040952": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 19:06:38.072733   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .DriverName
	I0914 19:06:38.072954   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHHostname
	I0914 19:06:38.075633   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:38.075959   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
	I0914 19:06:38.076005   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:38.076116   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHPort
	I0914 19:06:38.076304   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
	I0914 19:06:38.076482   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
	I0914 19:06:38.076626   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHUsername
	I0914 19:06:38.076778   29302 main.go:141] libmachine: Using SSH client type: native
	I0914 19:06:38.077069   29302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0914 19:06:38.077082   29302 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0914 19:06:38.199048   29302 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0914 19:06:38.199074   29302 buildroot.go:70] root file system type: tmpfs
	I0914 19:06:38.199195   29302 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0914 19:06:38.199220   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHHostname
	I0914 19:06:38.201601   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:38.201971   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
	I0914 19:06:38.201992   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:38.202160   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHPort
	I0914 19:06:38.202374   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
	I0914 19:06:38.202529   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
	I0914 19:06:38.202642   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHUsername
	I0914 19:06:38.202785   29302 main.go:141] libmachine: Using SSH client type: native
	I0914 19:06:38.203087   29302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0914 19:06:38.203150   29302 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.39.14"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0914 19:06:38.339052   29302 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.39.14
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0914 19:06:38.339081   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHHostname
	I0914 19:06:38.341807   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:38.342226   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
	I0914 19:06:38.342261   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:38.342430   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHPort
	I0914 19:06:38.342621   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
	I0914 19:06:38.342798   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
	I0914 19:06:38.342954   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHUsername
	I0914 19:06:38.343119   29302 main.go:141] libmachine: Using SSH client type: native
	I0914 19:06:38.343432   29302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0914 19:06:38.343461   29302 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0914 19:06:39.223778   29302 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0914 19:06:39.223805   29302 machine.go:91] provisioned docker machine in 1.654241082s
	I0914 19:06:39.223818   29302 start.go:300] post-start starting for "multinode-040952-m02" (driver="kvm2")
	I0914 19:06:39.223828   29302 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 19:06:39.223843   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .DriverName
	I0914 19:06:39.224176   29302 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 19:06:39.224211   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHHostname
	I0914 19:06:39.226901   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:39.227247   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
	I0914 19:06:39.227280   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:39.227544   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHPort
	I0914 19:06:39.227745   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
	I0914 19:06:39.227911   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHUsername
	I0914 19:06:39.228053   29302 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/multinode-040952-m02/id_rsa Username:docker}
	I0914 19:06:39.321534   29302 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 19:06:39.325932   29302 command_runner.go:130] > NAME=Buildroot
	I0914 19:06:39.325948   29302 command_runner.go:130] > VERSION=2021.02.12-1-gaa3debf-dirty
	I0914 19:06:39.325957   29302 command_runner.go:130] > ID=buildroot
	I0914 19:06:39.325962   29302 command_runner.go:130] > VERSION_ID=2021.02.12
	I0914 19:06:39.325972   29302 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0914 19:06:39.326365   29302 info.go:137] Remote host: Buildroot 2021.02.12
	I0914 19:06:39.326381   29302 filesync.go:126] Scanning /home/jenkins/minikube-integration/17217-7285/.minikube/addons for local assets ...
	I0914 19:06:39.326432   29302 filesync.go:126] Scanning /home/jenkins/minikube-integration/17217-7285/.minikube/files for local assets ...
	I0914 19:06:39.326501   29302 filesync.go:149] local asset: /home/jenkins/minikube-integration/17217-7285/.minikube/files/etc/ssl/certs/145062.pem -> 145062.pem in /etc/ssl/certs
	I0914 19:06:39.326513   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/files/etc/ssl/certs/145062.pem -> /etc/ssl/certs/145062.pem
	I0914 19:06:39.326584   29302 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 19:06:39.336967   29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/files/etc/ssl/certs/145062.pem --> /etc/ssl/certs/145062.pem (1708 bytes)
	I0914 19:06:39.360557   29302 start.go:303] post-start completed in 136.725285ms
	I0914 19:06:39.360581   29302 fix.go:56] fixHost completed within 20.520003113s
	I0914 19:06:39.360605   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHHostname
	I0914 19:06:39.362948   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:39.363269   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
	I0914 19:06:39.363315   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:39.363388   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHPort
	I0914 19:06:39.363595   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
	I0914 19:06:39.363783   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
	I0914 19:06:39.363936   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHUsername
	I0914 19:06:39.364099   29302 main.go:141] libmachine: Using SSH client type: native
	I0914 19:06:39.364460   29302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0914 19:06:39.364472   29302 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0914 19:06:39.486077   29302 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694718399.434257584
	
	I0914 19:06:39.486101   29302 fix.go:206] guest clock: 1694718399.434257584
	I0914 19:06:39.486110   29302 fix.go:219] Guest: 2023-09-14 19:06:39.434257584 +0000 UTC Remote: 2023-09-14 19:06:39.360584834 +0000 UTC m=+78.429360914 (delta=73.67275ms)
	I0914 19:06:39.486128   29302 fix.go:190] guest clock delta is within tolerance: 73.67275ms
	I0914 19:06:39.486135   29302 start.go:83] releasing machines lock for "multinode-040952-m02", held for 20.645613984s
	I0914 19:06:39.486160   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .DriverName
	I0914 19:06:39.486442   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetIP
	I0914 19:06:39.488972   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:39.489301   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
	I0914 19:06:39.489321   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:39.491933   29302 out.go:177] * Found network options:
	I0914 19:06:39.493577   29302 out.go:177]   - NO_PROXY=192.168.39.14
	W0914 19:06:39.495217   29302 proxy.go:119] fail to check proxy env: Error ip not in block
	I0914 19:06:39.495254   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .DriverName
	I0914 19:06:39.495809   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .DriverName
	I0914 19:06:39.495995   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .DriverName
	I0914 19:06:39.496072   29302 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 19:06:39.496116   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHHostname
	W0914 19:06:39.496205   29302 proxy.go:119] fail to check proxy env: Error ip not in block
	I0914 19:06:39.496278   29302 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0914 19:06:39.496299   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHHostname
	I0914 19:06:39.498773   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:39.498969   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:39.499150   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
	I0914 19:06:39.499181   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:39.499303   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHPort
	I0914 19:06:39.499318   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
	I0914 19:06:39.499348   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:39.499474   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
	I0914 19:06:39.499542   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHPort
	I0914 19:06:39.499625   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHUsername
	I0914 19:06:39.499690   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
	I0914 19:06:39.499747   29302 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/multinode-040952-m02/id_rsa Username:docker}
	I0914 19:06:39.499829   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHUsername
	I0914 19:06:39.499990   29302 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/multinode-040952-m02/id_rsa Username:docker}
	I0914 19:06:39.587315   29302 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0914 19:06:39.587941   29302 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 19:06:39.588006   29302 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 19:06:39.610801   29302 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0914 19:06:39.610851   29302 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0914 19:06:39.610876   29302 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 19:06:39.610891   29302 start.go:469] detecting cgroup driver to use...
	I0914 19:06:39.610989   29302 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 19:06:39.629605   29302 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0914 19:06:39.630150   29302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0914 19:06:39.641201   29302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0914 19:06:39.651880   29302 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0914 19:06:39.651937   29302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0914 19:06:39.663251   29302 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 19:06:39.674202   29302 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0914 19:06:39.685211   29302 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 19:06:39.696908   29302 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 19:06:39.709126   29302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0914 19:06:39.721014   29302 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 19:06:39.731728   29302 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0914 19:06:39.731788   29302 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 19:06:39.742220   29302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 19:06:39.854266   29302 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0914 19:06:39.871417   29302 start.go:469] detecting cgroup driver to use...
	I0914 19:06:39.871488   29302 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0914 19:06:39.884609   29302 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0914 19:06:39.884650   29302 command_runner.go:130] > [Unit]
	I0914 19:06:39.884657   29302 command_runner.go:130] > Description=Docker Application Container Engine
	I0914 19:06:39.884663   29302 command_runner.go:130] > Documentation=https://docs.docker.com
	I0914 19:06:39.884669   29302 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0914 19:06:39.884677   29302 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0914 19:06:39.884682   29302 command_runner.go:130] > StartLimitBurst=3
	I0914 19:06:39.884689   29302 command_runner.go:130] > StartLimitIntervalSec=60
	I0914 19:06:39.884693   29302 command_runner.go:130] > [Service]
	I0914 19:06:39.884698   29302 command_runner.go:130] > Type=notify
	I0914 19:06:39.884702   29302 command_runner.go:130] > Restart=on-failure
	I0914 19:06:39.884708   29302 command_runner.go:130] > Environment=NO_PROXY=192.168.39.14
	I0914 19:06:39.884715   29302 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0914 19:06:39.884726   29302 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0914 19:06:39.884735   29302 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0914 19:06:39.884743   29302 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0914 19:06:39.884752   29302 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0914 19:06:39.884761   29302 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0914 19:06:39.884768   29302 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0914 19:06:39.884787   29302 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0914 19:06:39.884796   29302 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0914 19:06:39.884802   29302 command_runner.go:130] > ExecStart=
	I0914 19:06:39.884821   29302 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I0914 19:06:39.884831   29302 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0914 19:06:39.884838   29302 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0914 19:06:39.884845   29302 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0914 19:06:39.884852   29302 command_runner.go:130] > LimitNOFILE=infinity
	I0914 19:06:39.884856   29302 command_runner.go:130] > LimitNPROC=infinity
	I0914 19:06:39.884862   29302 command_runner.go:130] > LimitCORE=infinity
	I0914 19:06:39.884867   29302 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0914 19:06:39.884875   29302 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0914 19:06:39.884879   29302 command_runner.go:130] > TasksMax=infinity
	I0914 19:06:39.884888   29302 command_runner.go:130] > TimeoutStartSec=0
	I0914 19:06:39.884894   29302 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0914 19:06:39.884898   29302 command_runner.go:130] > Delegate=yes
	I0914 19:06:39.884905   29302 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0914 19:06:39.884917   29302 command_runner.go:130] > KillMode=process
	I0914 19:06:39.884923   29302 command_runner.go:130] > [Install]
	I0914 19:06:39.884929   29302 command_runner.go:130] > WantedBy=multi-user.target
	I0914 19:06:39.885921   29302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 19:06:39.902340   29302 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 19:06:39.919241   29302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 19:06:39.931882   29302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0914 19:06:39.944141   29302 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0914 19:06:39.980328   29302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0914 19:06:39.993054   29302 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 19:06:40.010119   29302 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0914 19:06:40.010413   29302 ssh_runner.go:195] Run: which cri-dockerd
	I0914 19:06:40.014171   29302 command_runner.go:130] > /usr/bin/cri-dockerd
	I0914 19:06:40.014287   29302 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0914 19:06:40.024688   29302 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
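	The 189-byte 10-cni.conf drop-in copied above is not reproduced in this log. A minimal sketch, assuming shell access to the multinode-040952-m02 VM, of how its effective contents could be checked with standard systemd tooling (this command was not run as part of this test):
	  sudo systemctl cat cri-docker.service --no-pager   # prints the base unit plus any drop-ins, including 10-cni.conf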
	I0914 19:06:40.042167   29302 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0914 19:06:40.160404   29302 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0914 19:06:40.272827   29302 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0914 19:06:40.272855   29302 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0914 19:06:40.289795   29302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 19:06:40.398781   29302 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0914 19:06:41.803191   29302 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.40437357s)
	I0914 19:06:41.803251   29302 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0914 19:06:41.905435   29302 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0914 19:06:42.032291   29302 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0914 19:06:42.160622   29302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 19:06:42.277173   29302 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0914 19:06:42.292786   29302 command_runner.go:130] ! Job failed. See "journalctl -xe" for details.
	I0914 19:06:42.294889   29302 out.go:177] 
	W0914 19:06:42.296193   29302 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	W0914 19:06:42.296210   29302 out.go:239] * 
	W0914 19:06:42.297001   29302 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 19:06:42.298210   29302 out.go:177] 
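	The second start aborts when "sudo systemctl restart cri-docker.socket" exits with status 1 (RUNTIME_ENABLE above). A minimal diagnostic sketch follows, assuming shell access to the multinode-040952-m02 VM; the unit names are taken from the log above, and none of these commands were executed as part of this run:
	  sudo systemctl status cri-docker.socket --no-pager        # current state and last error of the socket unit
	  sudo journalctl -xeu cri-docker.socket --no-pager         # journal entries behind 'Job failed. See "journalctl -xe"'
	  sudo journalctl -u cri-docker.service --no-pager | tail -n 50   # the service the socket activates
	  sudo systemctl status docker --no-pager                   # confirm the Docker daemon itself restarted cleanly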
	
	* 
	* ==> Docker <==
	* -- Journal begins at Thu 2023-09-14 19:05:32 UTC, ends at Thu 2023-09-14 19:06:43 UTC. --
	Sep 14 19:06:07 multinode-040952 dockerd[833]: time="2023-09-14T19:06:07.110721289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 19:06:07 multinode-040952 dockerd[833]: time="2023-09-14T19:06:07.110740258Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 14 19:06:07 multinode-040952 dockerd[833]: time="2023-09-14T19:06:07.110748982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 19:06:10 multinode-040952 dockerd[833]: time="2023-09-14T19:06:10.560125431Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 14 19:06:10 multinode-040952 dockerd[833]: time="2023-09-14T19:06:10.561439001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 19:06:10 multinode-040952 dockerd[833]: time="2023-09-14T19:06:10.561948132Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 14 19:06:10 multinode-040952 dockerd[833]: time="2023-09-14T19:06:10.562497172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 19:06:10 multinode-040952 dockerd[833]: time="2023-09-14T19:06:10.912088487Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 14 19:06:10 multinode-040952 dockerd[833]: time="2023-09-14T19:06:10.912140403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 19:06:10 multinode-040952 dockerd[833]: time="2023-09-14T19:06:10.912165447Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 14 19:06:10 multinode-040952 dockerd[833]: time="2023-09-14T19:06:10.912176351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 19:06:11 multinode-040952 cri-dockerd[1047]: time="2023-09-14T19:06:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8c5adb06ad8644fdaa00404169cd62847107a188941b235afcd96bc74a471f36/resolv.conf as [nameserver 192.168.122.1]"
	Sep 14 19:06:11 multinode-040952 dockerd[833]: time="2023-09-14T19:06:11.248847029Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 14 19:06:11 multinode-040952 dockerd[833]: time="2023-09-14T19:06:11.248915066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 19:06:11 multinode-040952 dockerd[833]: time="2023-09-14T19:06:11.248934609Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 14 19:06:11 multinode-040952 dockerd[833]: time="2023-09-14T19:06:11.248946671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 19:06:11 multinode-040952 cri-dockerd[1047]: time="2023-09-14T19:06:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9b65f9b32fcb4cf47bc4f4ec371810e2c59f9379e67003f5d435073d09f33200/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 14 19:06:11 multinode-040952 dockerd[833]: time="2023-09-14T19:06:11.746238437Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 14 19:06:11 multinode-040952 dockerd[833]: time="2023-09-14T19:06:11.746301425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 19:06:11 multinode-040952 dockerd[833]: time="2023-09-14T19:06:11.746320987Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 14 19:06:11 multinode-040952 dockerd[833]: time="2023-09-14T19:06:11.746384615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 19:06:34 multinode-040952 dockerd[833]: time="2023-09-14T19:06:34.567374268Z" level=info msg="shim disconnected" id=c9e2f6411addd9aa2f754f78fda3ce71ac8bf7bb5ff3f65f3c0511f08e429929 namespace=moby
	Sep 14 19:06:34 multinode-040952 dockerd[833]: time="2023-09-14T19:06:34.568816508Z" level=warning msg="cleaning up after shim disconnected" id=c9e2f6411addd9aa2f754f78fda3ce71ac8bf7bb5ff3f65f3c0511f08e429929 namespace=moby
	Sep 14 19:06:34 multinode-040952 dockerd[827]: time="2023-09-14T19:06:34.569676835Z" level=info msg="ignoring event" container=c9e2f6411addd9aa2f754f78fda3ce71ac8bf7bb5ff3f65f3c0511f08e429929 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 14 19:06:34 multinode-040952 dockerd[833]: time="2023-09-14T19:06:34.570344420Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID
	45c401009e903       8c811b4aec35f                                                                                         32 seconds ago      Running             busybox                   1                   9b65f9b32fcb4
	d8bb85ef502bc       ead0a4a53df89                                                                                         32 seconds ago      Running             coredns                   1                   8c5adb06ad864
	b3f4888d47e37       c7d1297425461                                                                                         37 seconds ago      Running             kindnet-cni               1                   ecedcc81d5040
	c9e2f6411addd       6e38f40d628db                                                                                         39 seconds ago      Exited              storage-provisioner       1                   6517274d37d45
	9057a95faf814       6cdbabde3874e                                                                                         40 seconds ago      Running             kube-proxy                1                   baaaa29d51d71
	1c691ff0fb1dc       b462ce0c8b1ff                                                                                         44 seconds ago      Running             kube-scheduler            1                   a2717cfc7b703
	d2a4b9fbe6163       73deb9a3f7025                                                                                         45 seconds ago      Running             etcd                      1                   8003d9c05224c
	b6362a20e1ba8       5c801295c21d0                                                                                         45 seconds ago      Running             kube-apiserver            1                   d62732c77e111
	7551a7f5f8d28       821b3dfea27be                                                                                         45 seconds ago      Running             kube-controller-manager   1                   d33e8c5c8b80c
	b2201408c190d       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   3 minutes ago       Exited              busybox                   0                   606d676847d38
	5ca168b256eca       ead0a4a53df89                                                                                         4 minutes ago       Exited              coredns                   0                   fb2dbcea99e9f
	1dac2d18ee960       kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052              4 minutes ago       Exited              kindnet-cni               0                   2c6b193d8f06a
	bd14e8416f22e       6cdbabde3874e                                                                                         4 minutes ago       Exited              kube-proxy                0                   ac89590af9af7
	e7dd2a8d2bf2a       b462ce0c8b1ff                                                                                         5 minutes ago       Exited              kube-scheduler            0                   3204588282f3d
	79de1cbad023f       73deb9a3f7025                                                                                         5 minutes ago       Exited              etcd                      0                   992d221cf3de6
	bdae306df7741       821b3dfea27be                                                                                         5 minutes ago       Exited              kube-controller-manager   0                   c60a4b7edf2a5
	7ae1932584ffa       5c801295c21d0                                                                                         5 minutes ago       Exited              kube-apiserver            0                   bf69af78fefd5
	
	* 
	* ==> coredns [5ca168b256ec] <==
	* [INFO] 10.244.1.2:34807 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001920386s
	[INFO] 10.244.1.2:58373 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000223623s
	[INFO] 10.244.1.2:34744 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000097963s
	[INFO] 10.244.1.2:42669 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00110869s
	[INFO] 10.244.1.2:49456 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000084315s
	[INFO] 10.244.1.2:36531 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000105982s
	[INFO] 10.244.1.2:44052 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073712s
	[INFO] 10.244.0.3:53028 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102025s
	[INFO] 10.244.0.3:60397 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000219163s
	[INFO] 10.244.0.3:58611 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119555s
	[INFO] 10.244.0.3:56794 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000389586s
	[INFO] 10.244.1.2:57290 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000238838s
	[INFO] 10.244.1.2:38598 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000112648s
	[INFO] 10.244.1.2:36747 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000130289s
	[INFO] 10.244.1.2:44678 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000130001s
	[INFO] 10.244.0.3:56148 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000416563s
	[INFO] 10.244.0.3:48925 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00015457s
	[INFO] 10.244.0.3:37027 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000266436s
	[INFO] 10.244.0.3:58029 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000132942s
	[INFO] 10.244.1.2:32850 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000167159s
	[INFO] 10.244.1.2:52181 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000075407s
	[INFO] 10.244.1.2:33878 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000077018s
	[INFO] 10.244.1.2:33144 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000119325s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [d8bb85ef502b] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:51360 - 19367 "HINFO IN 781133024460292738.4424492601979386444. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.021489339s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-040952
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-040952
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=677eba4579c03f097a5d68f80823c59a8add4a3b
	                    minikube.k8s.io/name=multinode-040952
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_14T19_01_41_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Sep 2023 19:01:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-040952
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Sep 2023 19:06:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Sep 2023 19:06:09 +0000   Thu, 14 Sep 2023 19:01:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Sep 2023 19:06:09 +0000   Thu, 14 Sep 2023 19:01:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Sep 2023 19:06:09 +0000   Thu, 14 Sep 2023 19:01:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Sep 2023 19:06:09 +0000   Thu, 14 Sep 2023 19:06:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.14
	  Hostname:    multinode-040952
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 a22e570b53364d97906f6fbadc119046
	  System UUID:                a22e570b-5336-4d97-906f-6fbadc119046
	  Boot ID:                    805cf3f0-f992-49df-b9c1-1c815bc938ec
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-8xj5t                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 coredns-5dd5756b68-qrv2r                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m50s
	  kube-system                 etcd-multinode-040952                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m2s
	  kube-system                 kindnet-hvz8s                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m50s
	  kube-system                 kube-apiserver-multinode-040952             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 kube-controller-manager-multinode-040952    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 kube-proxy-hbsmt                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 kube-scheduler-multinode-040952             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m48s                  kube-proxy       
	  Normal  Starting                 39s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  5m11s (x8 over 5m11s)  kubelet          Node multinode-040952 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m11s (x8 over 5m11s)  kubelet          Node multinode-040952 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m11s (x7 over 5m11s)  kubelet          Node multinode-040952 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m3s                   kubelet          Node multinode-040952 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m3s                   kubelet          Node multinode-040952 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     5m3s                   kubelet          Node multinode-040952 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m3s                   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m2s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m50s                  node-controller  Node multinode-040952 event: Registered Node multinode-040952 in Controller
	  Normal  NodeReady                4m38s                  kubelet          Node multinode-040952 status is now: NodeReady
	  Normal  Starting                 47s                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  47s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  46s (x8 over 47s)      kubelet          Node multinode-040952 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    46s (x8 over 47s)      kubelet          Node multinode-040952 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     46s (x7 over 47s)      kubelet          Node multinode-040952 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s                    node-controller  Node multinode-040952 event: Registered Node multinode-040952 in Controller
	
	
	Name:               multinode-040952-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-040952-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Sep 2023 19:02:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-040952-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Sep 2023 19:04:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Sep 2023 19:03:27 +0000   Thu, 14 Sep 2023 19:02:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Sep 2023 19:03:27 +0000   Thu, 14 Sep 2023 19:02:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Sep 2023 19:03:27 +0000   Thu, 14 Sep 2023 19:02:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Sep 2023 19:03:27 +0000   Thu, 14 Sep 2023 19:03:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.16
	  Hostname:    multinode-040952-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 275cf71437384b3685d193f4ccec91cc
	  System UUID:                275cf714-3738-4b36-85d1-93f4ccec91cc
	  Boot ID:                    9d1451db-6918-461e-9cc4-16724afd48c4
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-msf7r    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 kindnet-lrkhw               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m47s
	  kube-system                 kube-proxy-gldkh            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m41s                  kube-proxy       
	  Normal  Starting                 3m47s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m47s (x2 over 3m47s)  kubelet          Node multinode-040952-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m47s (x2 over 3m47s)  kubelet          Node multinode-040952-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m47s (x2 over 3m47s)  kubelet          Node multinode-040952-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m45s                  node-controller  Node multinode-040952-m02 event: Registered Node multinode-040952-m02 in Controller
	  Normal  NodeReady                3m34s                  kubelet          Node multinode-040952-m02 status is now: NodeReady
	  Normal  RegisteredNode           29s                    node-controller  Node multinode-040952-m02 event: Registered Node multinode-040952-m02 in Controller
	
	
	Name:               multinode-040952-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-040952-m03
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Sep 2023 19:04:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-040952-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Sep 2023 19:04:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Sep 2023 19:04:49 +0000   Thu, 14 Sep 2023 19:04:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Sep 2023 19:04:49 +0000   Thu, 14 Sep 2023 19:04:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Sep 2023 19:04:49 +0000   Thu, 14 Sep 2023 19:04:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Sep 2023 19:04:49 +0000   Thu, 14 Sep 2023 19:04:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.107
	  Hostname:    multinode-040952-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 f4a36b25533f44c6ba83b2c2bb7581e2
	  System UUID:                f4a36b25-533f-44c6-ba83-b2c2bb7581e2
	  Boot ID:                    e31d5883-b5c3-4efd-a9c9-90546837ce6d
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-pjfsc       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m53s
	  kube-system                 kube-proxy-gpl2p    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m                     kube-proxy       
	  Normal  Starting                 2m47s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m53s (x5 over 2m54s)  kubelet          Node multinode-040952-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m53s (x5 over 2m54s)  kubelet          Node multinode-040952-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m53s (x5 over 2m54s)  kubelet          Node multinode-040952-m03 status is now: NodeHasSufficientPID
	  Normal  NodeReady                2m37s                  kubelet          Node multinode-040952-m03 status is now: NodeReady
	  Normal  Starting                 2m2s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m2s (x2 over 2m2s)    kubelet          Node multinode-040952-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m2s (x2 over 2m2s)    kubelet          Node multinode-040952-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m2s (x2 over 2m2s)    kubelet          Node multinode-040952-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m2s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                114s                   kubelet          Node multinode-040952-m03 status is now: NodeReady
	  Normal  RegisteredNode           29s                    node-controller  Node multinode-040952-m03 event: Registered Node multinode-040952-m03 in Controller
	
	* 
	* ==> dmesg <==
	* [Sep14 19:05] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.071026] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.320578] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.256122] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.139451] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.741731] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.615091] systemd-fstab-generator[514]: Ignoring "noauto" for root device
	[  +0.091320] systemd-fstab-generator[526]: Ignoring "noauto" for root device
	[  +1.160091] systemd-fstab-generator[754]: Ignoring "noauto" for root device
	[  +0.277522] systemd-fstab-generator[794]: Ignoring "noauto" for root device
	[  +0.106300] systemd-fstab-generator[805]: Ignoring "noauto" for root device
	[  +0.125747] systemd-fstab-generator[818]: Ignoring "noauto" for root device
	[  +0.569199] systemd-fstab-generator[992]: Ignoring "noauto" for root device
	[  +0.109950] systemd-fstab-generator[1003]: Ignoring "noauto" for root device
	[  +0.112895] systemd-fstab-generator[1014]: Ignoring "noauto" for root device
	[  +0.113984] systemd-fstab-generator[1025]: Ignoring "noauto" for root device
	[  +0.119773] systemd-fstab-generator[1039]: Ignoring "noauto" for root device
	[ +11.953340] systemd-fstab-generator[1284]: Ignoring "noauto" for root device
	[  +0.384554] kauditd_printk_skb: 67 callbacks suppressed
	
	* 
	* ==> etcd [79de1cbad023] <==
	* {"level":"info","ts":"2023-09-14T19:01:36.01867Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"599035dfeb7e0476 became leader at term 2"}
	{"level":"info","ts":"2023-09-14T19:01:36.018676Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 599035dfeb7e0476 elected leader 599035dfeb7e0476 at term 2"}
	{"level":"info","ts":"2023-09-14T19:01:36.0202Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"599035dfeb7e0476","local-member-attributes":"{Name:multinode-040952 ClientURLs:[https://192.168.39.14:2379]}","request-path":"/0/members/599035dfeb7e0476/attributes","cluster-id":"7dcc0a60dbbc15a1","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-14T19:01:36.020483Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-14T19:01:36.020568Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-14T19:01:36.022008Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-14T19:01:36.022275Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-14T19:01:36.022291Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-14T19:01:36.022636Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.14:2379"}
	{"level":"info","ts":"2023-09-14T19:01:36.022715Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T19:01:36.024658Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7dcc0a60dbbc15a1","local-member-id":"599035dfeb7e0476","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T19:01:36.024747Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T19:01:36.024765Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T19:03:51.807588Z","caller":"traceutil/trace.go:171","msg":"trace[23883446] transaction","detail":"{read_only:false; response_revision:658; number_of_response:1; }","duration":"129.206345ms","start":"2023-09-14T19:03:51.678265Z","end":"2023-09-14T19:03:51.807471Z","steps":["trace[23883446] 'process raft request'  (duration: 129.086639ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-14T19:04:52.930829Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-09-14T19:04:52.930966Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"multinode-040952","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.14:2380"],"advertise-client-urls":["https://192.168.39.14:2379"]}
	{"level":"warn","ts":"2023-09-14T19:04:52.931161Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-14T19:04:52.931257Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-14T19:04:52.932088Z","caller":"v3rpc/watch.go:473","msg":"failed to send watch response to gRPC stream","error":"rpc error: code = Unavailable desc = transport is closing"}
	{"level":"warn","ts":"2023-09-14T19:04:52.952017Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.14:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-14T19:04:52.952093Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.14:2379: use of closed network connection"}
	{"level":"info","ts":"2023-09-14T19:04:52.952149Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"599035dfeb7e0476","current-leader-member-id":"599035dfeb7e0476"}
	{"level":"info","ts":"2023-09-14T19:04:52.955652Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.14:2380"}
	{"level":"info","ts":"2023-09-14T19:04:52.955754Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.14:2380"}
	{"level":"info","ts":"2023-09-14T19:04:52.955763Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"multinode-040952","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.14:2380"],"advertise-client-urls":["https://192.168.39.14:2379"]}
	
	* 
	* ==> etcd [d2a4b9fbe616] <==
	* {"level":"info","ts":"2023-09-14T19:05:59.734271Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-14T19:05:59.734297Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-14T19:05:59.740699Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-09-14T19:05:59.743953Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"599035dfeb7e0476 switched to configuration voters=(6453717501866804342)"}
	{"level":"info","ts":"2023-09-14T19:05:59.746046Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"7dcc0a60dbbc15a1","local-member-id":"599035dfeb7e0476","added-peer-id":"599035dfeb7e0476","added-peer-peer-urls":["https://192.168.39.14:2380"]}
	{"level":"info","ts":"2023-09-14T19:05:59.746423Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7dcc0a60dbbc15a1","local-member-id":"599035dfeb7e0476","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T19:05:59.746624Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T19:05:59.744002Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.14:2380"}
	{"level":"info","ts":"2023-09-14T19:05:59.762875Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.14:2380"}
	{"level":"info","ts":"2023-09-14T19:05:59.767737Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"599035dfeb7e0476","initial-advertise-peer-urls":["https://192.168.39.14:2380"],"listen-peer-urls":["https://192.168.39.14:2380"],"advertise-client-urls":["https://192.168.39.14:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.14:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-09-14T19:05:59.767794Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-09-14T19:06:00.733425Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"599035dfeb7e0476 is starting a new election at term 2"}
	{"level":"info","ts":"2023-09-14T19:06:00.733712Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"599035dfeb7e0476 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-09-14T19:06:00.73392Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"599035dfeb7e0476 received MsgPreVoteResp from 599035dfeb7e0476 at term 2"}
	{"level":"info","ts":"2023-09-14T19:06:00.734128Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"599035dfeb7e0476 became candidate at term 3"}
	{"level":"info","ts":"2023-09-14T19:06:00.73421Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"599035dfeb7e0476 received MsgVoteResp from 599035dfeb7e0476 at term 3"}
	{"level":"info","ts":"2023-09-14T19:06:00.734234Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"599035dfeb7e0476 became leader at term 3"}
	{"level":"info","ts":"2023-09-14T19:06:00.734355Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 599035dfeb7e0476 elected leader 599035dfeb7e0476 at term 3"}
	{"level":"info","ts":"2023-09-14T19:06:00.738829Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"599035dfeb7e0476","local-member-attributes":"{Name:multinode-040952 ClientURLs:[https://192.168.39.14:2379]}","request-path":"/0/members/599035dfeb7e0476/attributes","cluster-id":"7dcc0a60dbbc15a1","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-14T19:06:00.739125Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-14T19:06:00.739447Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-14T19:06:00.739493Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-14T19:06:00.739514Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-14T19:06:00.740785Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-14T19:06:00.740794Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.14:2379"}
	
	* 
	* ==> kernel <==
	*  19:06:43 up 1 min,  0 users,  load average: 1.27, 0.36, 0.12
	Linux multinode-040952 5.10.57 #1 SMP Tue Sep 12 02:34:33 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [1dac2d18ee96] <==
	* I0914 19:04:13.417146       1 main.go:223] Handling node with IPs: map[192.168.39.14:{}]
	I0914 19:04:13.417297       1 main.go:227] handling current node
	I0914 19:04:13.417313       1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
	I0914 19:04:13.417322       1 main.go:250] Node multinode-040952-m02 has CIDR [10.244.1.0/24] 
	I0914 19:04:13.417671       1 main.go:223] Handling node with IPs: map[192.168.39.107:{}]
	I0914 19:04:13.417972       1 main.go:250] Node multinode-040952-m03 has CIDR [10.244.2.0/24] 
	I0914 19:04:23.424504       1 main.go:223] Handling node with IPs: map[192.168.39.14:{}]
	I0914 19:04:23.425037       1 main.go:227] handling current node
	I0914 19:04:23.425203       1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
	I0914 19:04:23.425329       1 main.go:250] Node multinode-040952-m02 has CIDR [10.244.1.0/24] 
	I0914 19:04:23.425757       1 main.go:223] Handling node with IPs: map[192.168.39.107:{}]
	I0914 19:04:23.425805       1 main.go:250] Node multinode-040952-m03 has CIDR [10.244.2.0/24] 
	I0914 19:04:33.433351       1 main.go:223] Handling node with IPs: map[192.168.39.14:{}]
	I0914 19:04:33.433474       1 main.go:227] handling current node
	I0914 19:04:33.433513       1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
	I0914 19:04:33.434156       1 main.go:250] Node multinode-040952-m02 has CIDR [10.244.1.0/24] 
	I0914 19:04:33.434804       1 main.go:223] Handling node with IPs: map[192.168.39.107:{}]
	I0914 19:04:33.435075       1 main.go:250] Node multinode-040952-m03 has CIDR [10.244.2.0/24] 
	I0914 19:04:43.456778       1 main.go:223] Handling node with IPs: map[192.168.39.14:{}]
	I0914 19:04:43.457185       1 main.go:227] handling current node
	I0914 19:04:43.457215       1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
	I0914 19:04:43.457226       1 main.go:250] Node multinode-040952-m02 has CIDR [10.244.1.0/24] 
	I0914 19:04:43.457383       1 main.go:223] Handling node with IPs: map[192.168.39.107:{}]
	I0914 19:04:43.457389       1 main.go:250] Node multinode-040952-m03 has CIDR [10.244.3.0/24] 
	I0914 19:04:43.457441       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 192.168.39.107 Flags: [] Table: 0} 
	
	* 
	* ==> kindnet [b3f4888d47e3] <==
	* I0914 19:06:08.275205       1 main.go:227] handling current node
	I0914 19:06:08.275662       1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
	I0914 19:06:08.275676       1 main.go:250] Node multinode-040952-m02 has CIDR [10.244.1.0/24] 
	I0914 19:06:08.275797       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.39.16 Flags: [] Table: 0} 
	I0914 19:06:08.275887       1 main.go:223] Handling node with IPs: map[192.168.39.107:{}]
	I0914 19:06:08.275896       1 main.go:250] Node multinode-040952-m03 has CIDR [10.244.3.0/24] 
	I0914 19:06:08.275949       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 192.168.39.107 Flags: [] Table: 0} 
	I0914 19:06:18.290953       1 main.go:223] Handling node with IPs: map[192.168.39.14:{}]
	I0914 19:06:18.290991       1 main.go:227] handling current node
	I0914 19:06:18.291009       1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
	I0914 19:06:18.291014       1 main.go:250] Node multinode-040952-m02 has CIDR [10.244.1.0/24] 
	I0914 19:06:18.291123       1 main.go:223] Handling node with IPs: map[192.168.39.107:{}]
	I0914 19:06:18.291128       1 main.go:250] Node multinode-040952-m03 has CIDR [10.244.3.0/24] 
	I0914 19:06:28.307114       1 main.go:223] Handling node with IPs: map[192.168.39.14:{}]
	I0914 19:06:28.307170       1 main.go:227] handling current node
	I0914 19:06:28.307193       1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
	I0914 19:06:28.307199       1 main.go:250] Node multinode-040952-m02 has CIDR [10.244.1.0/24] 
	I0914 19:06:28.307346       1 main.go:223] Handling node with IPs: map[192.168.39.107:{}]
	I0914 19:06:28.307381       1 main.go:250] Node multinode-040952-m03 has CIDR [10.244.3.0/24] 
	I0914 19:06:38.313370       1 main.go:223] Handling node with IPs: map[192.168.39.14:{}]
	I0914 19:06:38.313758       1 main.go:227] handling current node
	I0914 19:06:38.314072       1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
	I0914 19:06:38.314290       1 main.go:250] Node multinode-040952-m02 has CIDR [10.244.1.0/24] 
	I0914 19:06:38.314714       1 main.go:223] Handling node with IPs: map[192.168.39.107:{}]
	I0914 19:06:38.314906       1 main.go:250] Node multinode-040952-m03 has CIDR [10.244.3.0/24] 
	
	* 
	* ==> kube-apiserver [7ae1932584ff] <==
	* }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 19:05:02.925127       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 19:05:02.938224       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 19:05:02.943236       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-apiserver [b6362a20e1ba] <==
	* I0914 19:06:02.103379       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0914 19:06:02.103893       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0914 19:06:02.103947       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0914 19:06:02.227119       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0914 19:06:02.271711       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0914 19:06:02.304807       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0914 19:06:02.304872       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0914 19:06:02.305849       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0914 19:06:02.305890       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0914 19:06:02.306061       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0914 19:06:02.331297       1 shared_informer.go:318] Caches are synced for configmaps
	I0914 19:06:02.331358       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0914 19:06:02.335150       1 aggregator.go:166] initial CRD sync complete...
	I0914 19:06:02.335193       1 autoregister_controller.go:141] Starting autoregister controller
	I0914 19:06:02.335200       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0914 19:06:02.335206       1 cache.go:39] Caches are synced for autoregister controller
	I0914 19:06:03.100463       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0914 19:06:03.368706       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.14]
	I0914 19:06:03.370054       1 controller.go:624] quota admission added evaluator for: endpoints
	I0914 19:06:03.376360       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0914 19:06:05.169364       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0914 19:06:05.329658       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0914 19:06:05.341332       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0914 19:06:05.419400       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0914 19:06:05.426410       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	* 
	* ==> kube-controller-manager [7551a7f5f8d2] <==
	* I0914 19:06:14.661322       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0914 19:06:14.661435       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0914 19:06:14.661442       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0914 19:06:14.664911       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0914 19:06:14.667650       1 shared_informer.go:318] Caches are synced for job
	I0914 19:06:14.678438       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0914 19:06:14.684625       1 shared_informer.go:318] Caches are synced for endpoint
	I0914 19:06:14.710898       1 shared_informer.go:318] Caches are synced for attach detach
	I0914 19:06:14.717414       1 shared_informer.go:318] Caches are synced for daemon sets
	I0914 19:06:14.743422       1 shared_informer.go:318] Caches are synced for taint
	I0914 19:06:14.743617       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0914 19:06:14.744935       1 event.go:307] "Event occurred" object="multinode-040952" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-040952 event: Registered Node multinode-040952 in Controller"
	I0914 19:06:14.744976       1 event.go:307] "Event occurred" object="multinode-040952-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-040952-m02 event: Registered Node multinode-040952-m02 in Controller"
	I0914 19:06:14.744985       1 event.go:307] "Event occurred" object="multinode-040952-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-040952-m03 event: Registered Node multinode-040952-m03 in Controller"
	I0914 19:06:14.747755       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0914 19:06:14.747973       1 shared_informer.go:318] Caches are synced for resource quota
	I0914 19:06:14.748234       1 taint_manager.go:211] "Sending events to api server"
	I0914 19:06:14.755944       1 shared_informer.go:318] Caches are synced for resource quota
	I0914 19:06:14.758787       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-040952"
	I0914 19:06:14.759112       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-040952-m02"
	I0914 19:06:14.759307       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-040952-m03"
	I0914 19:06:14.761326       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0914 19:06:15.192335       1 shared_informer.go:318] Caches are synced for garbage collector
	I0914 19:06:15.196730       1 shared_informer.go:318] Caches are synced for garbage collector
	I0914 19:06:15.196764       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	* 
	* ==> kube-controller-manager [bdae306df774] <==
	* I0914 19:03:11.800269       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I0914 19:03:11.822032       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-msf7r"
	I0914 19:03:11.832933       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-8xj5t"
	I0914 19:03:11.858800       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="61.587243ms"
	I0914 19:03:11.881601       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="22.674253ms"
	I0914 19:03:11.911272       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="29.257061ms"
	I0914 19:03:11.911865       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="129.703µs"
	I0914 19:03:13.323606       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-msf7r" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-msf7r"
	I0914 19:03:14.759110       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="8.215323ms"
	I0914 19:03:14.759979       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="35.128µs"
	I0914 19:03:15.674480       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="8.700191ms"
	I0914 19:03:15.674657       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="46.358µs"
	I0914 19:03:50.546206       1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-040952-m02"
	I0914 19:03:50.547815       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-040952-m03\" does not exist"
	I0914 19:03:50.566383       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-gpl2p"
	I0914 19:03:50.573363       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-pjfsc"
	I0914 19:03:50.579177       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-040952-m03" podCIDRs=["10.244.2.0/24"]
	I0914 19:03:53.329628       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-040952-m03"
	I0914 19:03:53.330341       1 event.go:307] "Event occurred" object="multinode-040952-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-040952-m03 event: Registered Node multinode-040952-m03 in Controller"
	I0914 19:04:06.424965       1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-040952-m02"
	I0914 19:04:40.617462       1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-040952-m02"
	I0914 19:04:41.474271       1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-040952-m02"
	I0914 19:04:41.476212       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-040952-m03\" does not exist"
	I0914 19:04:41.488035       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-040952-m03" podCIDRs=["10.244.3.0/24"]
	I0914 19:04:49.789872       1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-040952-m02"
	
	* 
	* ==> kube-proxy [9057a95faf81] <==
	* I0914 19:06:04.144375       1 server_others.go:69] "Using iptables proxy"
	I0914 19:06:04.170724       1 node.go:141] Successfully retrieved node IP: 192.168.39.14
	I0914 19:06:04.450059       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0914 19:06:04.450082       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0914 19:06:04.458361       1 server_others.go:152] "Using iptables Proxier"
	I0914 19:06:04.459621       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0914 19:06:04.460661       1 server.go:846] "Version info" version="v1.28.1"
	I0914 19:06:04.461096       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 19:06:04.466061       1 config.go:188] "Starting service config controller"
	I0914 19:06:04.466932       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0914 19:06:04.467389       1 config.go:97] "Starting endpoint slice config controller"
	I0914 19:06:04.467710       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0914 19:06:04.469390       1 config.go:315] "Starting node config controller"
	I0914 19:06:04.469898       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0914 19:06:04.568257       1 shared_informer.go:318] Caches are synced for service config
	I0914 19:06:04.568320       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0914 19:06:04.571747       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-proxy [bd14e8416f22] <==
	* I0914 19:01:54.607139       1 server_others.go:69] "Using iptables proxy"
	I0914 19:01:54.619412       1 node.go:141] Successfully retrieved node IP: 192.168.39.14
	I0914 19:01:54.687340       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0914 19:01:54.687387       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0914 19:01:54.690390       1 server_others.go:152] "Using iptables Proxier"
	I0914 19:01:54.690676       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0914 19:01:54.690863       1 server.go:846] "Version info" version="v1.28.1"
	I0914 19:01:54.690874       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 19:01:54.691425       1 config.go:188] "Starting service config controller"
	I0914 19:01:54.691480       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0914 19:01:54.691505       1 config.go:97] "Starting endpoint slice config controller"
	I0914 19:01:54.691634       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0914 19:01:54.693270       1 config.go:315] "Starting node config controller"
	I0914 19:01:54.693313       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0914 19:01:54.792627       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0914 19:01:54.792662       1 shared_informer.go:318] Caches are synced for service config
	I0914 19:01:54.793421       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [1c691ff0fb1d] <==
	* I0914 19:06:00.284533       1 serving.go:348] Generated self-signed cert in-memory
	W0914 19:06:02.177631       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0914 19:06:02.177821       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0914 19:06:02.178051       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0914 19:06:02.178277       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0914 19:06:02.270392       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.1"
	I0914 19:06:02.270853       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 19:06:02.286074       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0914 19:06:02.290157       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0914 19:06:02.290663       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0914 19:06:02.290679       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0914 19:06:02.393949       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [e7dd2a8d2bf2] <==
	* E0914 19:01:37.477320       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0914 19:01:37.477458       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0914 19:01:37.477507       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0914 19:01:38.288201       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0914 19:01:38.288230       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0914 19:01:38.315971       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0914 19:01:38.315998       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0914 19:01:38.401116       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0914 19:01:38.401259       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0914 19:01:38.486649       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0914 19:01:38.486726       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0914 19:01:38.559583       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0914 19:01:38.559638       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0914 19:01:38.654661       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0914 19:01:38.654763       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0914 19:01:38.746863       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0914 19:01:38.747118       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0914 19:01:38.748736       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0914 19:01:38.749082       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0914 19:01:38.759272       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0914 19:01:38.759300       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0914 19:01:40.363415       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0914 19:04:52.977252       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0914 19:04:52.977363       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0914 19:04:52.977770       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-09-14 19:05:32 UTC, ends at Thu 2023-09-14 19:06:44 UTC. --
	Sep 14 19:06:03 multinode-040952 kubelet[1290]: E0914 19:06:03.334153    1290 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 14 19:06:03 multinode-040952 kubelet[1290]: E0914 19:06:03.334219    1290 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f9293d00-1000-4ffa-b978-d08c00eee7e7-config-volume podName:f9293d00-1000-4ffa-b978-d08c00eee7e7 nodeName:}" failed. No retries permitted until 2023-09-14 19:06:04.334203478 +0000 UTC m=+7.832049981 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/f9293d00-1000-4ffa-b978-d08c00eee7e7-config-volume") pod "coredns-5dd5756b68-qrv2r" (UID: "f9293d00-1000-4ffa-b978-d08c00eee7e7") : object "kube-system"/"coredns" not registered
	Sep 14 19:06:03 multinode-040952 kubelet[1290]: E0914 19:06:03.435647    1290 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Sep 14 19:06:03 multinode-040952 kubelet[1290]: E0914 19:06:03.435679    1290 projected.go:198] Error preparing data for projected volume kube-api-access-x7fmj for pod default/busybox-5bc68d56bd-8xj5t: object "default"/"kube-root-ca.crt" not registered
	Sep 14 19:06:03 multinode-040952 kubelet[1290]: E0914 19:06:03.435727    1290 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a8ee02a0-c9ae-454d-902d-c10e99f35812-kube-api-access-x7fmj podName:a8ee02a0-c9ae-454d-902d-c10e99f35812 nodeName:}" failed. No retries permitted until 2023-09-14 19:06:04.435713596 +0000 UTC m=+7.933560098 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-x7fmj" (UniqueName: "kubernetes.io/projected/a8ee02a0-c9ae-454d-902d-c10e99f35812-kube-api-access-x7fmj") pod "busybox-5bc68d56bd-8xj5t" (UID: "a8ee02a0-c9ae-454d-902d-c10e99f35812") : object "default"/"kube-root-ca.crt" not registered
	Sep 14 19:06:04 multinode-040952 kubelet[1290]: E0914 19:06:04.343855    1290 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 14 19:06:04 multinode-040952 kubelet[1290]: E0914 19:06:04.343919    1290 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f9293d00-1000-4ffa-b978-d08c00eee7e7-config-volume podName:f9293d00-1000-4ffa-b978-d08c00eee7e7 nodeName:}" failed. No retries permitted until 2023-09-14 19:06:06.343905485 +0000 UTC m=+9.841751999 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/f9293d00-1000-4ffa-b978-d08c00eee7e7-config-volume") pod "coredns-5dd5756b68-qrv2r" (UID: "f9293d00-1000-4ffa-b978-d08c00eee7e7") : object "kube-system"/"coredns" not registered
	Sep 14 19:06:04 multinode-040952 kubelet[1290]: E0914 19:06:04.444793    1290 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Sep 14 19:06:04 multinode-040952 kubelet[1290]: E0914 19:06:04.444924    1290 projected.go:198] Error preparing data for projected volume kube-api-access-x7fmj for pod default/busybox-5bc68d56bd-8xj5t: object "default"/"kube-root-ca.crt" not registered
	Sep 14 19:06:04 multinode-040952 kubelet[1290]: E0914 19:06:04.445066    1290 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a8ee02a0-c9ae-454d-902d-c10e99f35812-kube-api-access-x7fmj podName:a8ee02a0-c9ae-454d-902d-c10e99f35812 nodeName:}" failed. No retries permitted until 2023-09-14 19:06:06.445023628 +0000 UTC m=+9.942870143 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-x7fmj" (UniqueName: "kubernetes.io/projected/a8ee02a0-c9ae-454d-902d-c10e99f35812-kube-api-access-x7fmj") pod "busybox-5bc68d56bd-8xj5t" (UID: "a8ee02a0-c9ae-454d-902d-c10e99f35812") : object "default"/"kube-root-ca.crt" not registered
	Sep 14 19:06:04 multinode-040952 kubelet[1290]: E0914 19:06:04.836832    1290 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5bc68d56bd-8xj5t" podUID="a8ee02a0-c9ae-454d-902d-c10e99f35812"
	Sep 14 19:06:04 multinode-040952 kubelet[1290]: E0914 19:06:04.836934    1290 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-qrv2r" podUID="f9293d00-1000-4ffa-b978-d08c00eee7e7"
	Sep 14 19:06:06 multinode-040952 kubelet[1290]: E0914 19:06:06.360509    1290 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 14 19:06:06 multinode-040952 kubelet[1290]: E0914 19:06:06.360711    1290 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f9293d00-1000-4ffa-b978-d08c00eee7e7-config-volume podName:f9293d00-1000-4ffa-b978-d08c00eee7e7 nodeName:}" failed. No retries permitted until 2023-09-14 19:06:10.360695397 +0000 UTC m=+13.858541911 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/f9293d00-1000-4ffa-b978-d08c00eee7e7-config-volume") pod "coredns-5dd5756b68-qrv2r" (UID: "f9293d00-1000-4ffa-b978-d08c00eee7e7") : object "kube-system"/"coredns" not registered
	Sep 14 19:06:06 multinode-040952 kubelet[1290]: E0914 19:06:06.461710    1290 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Sep 14 19:06:06 multinode-040952 kubelet[1290]: E0914 19:06:06.461760    1290 projected.go:198] Error preparing data for projected volume kube-api-access-x7fmj for pod default/busybox-5bc68d56bd-8xj5t: object "default"/"kube-root-ca.crt" not registered
	Sep 14 19:06:06 multinode-040952 kubelet[1290]: E0914 19:06:06.461858    1290 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a8ee02a0-c9ae-454d-902d-c10e99f35812-kube-api-access-x7fmj podName:a8ee02a0-c9ae-454d-902d-c10e99f35812 nodeName:}" failed. No retries permitted until 2023-09-14 19:06:10.461842696 +0000 UTC m=+13.959689202 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-x7fmj" (UniqueName: "kubernetes.io/projected/a8ee02a0-c9ae-454d-902d-c10e99f35812-kube-api-access-x7fmj") pod "busybox-5bc68d56bd-8xj5t" (UID: "a8ee02a0-c9ae-454d-902d-c10e99f35812") : object "default"/"kube-root-ca.crt" not registered
	Sep 14 19:06:06 multinode-040952 kubelet[1290]: I0914 19:06:06.956674    1290 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecedcc81d5040d88abcafe724d7ff2140b999b458d0e93f11b00ad6783066a7b"
	Sep 14 19:06:08 multinode-040952 kubelet[1290]: E0914 19:06:08.069490    1290 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5bc68d56bd-8xj5t" podUID="a8ee02a0-c9ae-454d-902d-c10e99f35812"
	Sep 14 19:06:08 multinode-040952 kubelet[1290]: E0914 19:06:08.077183    1290 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-qrv2r" podUID="f9293d00-1000-4ffa-b978-d08c00eee7e7"
	Sep 14 19:06:09 multinode-040952 kubelet[1290]: I0914 19:06:09.602526    1290 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Sep 14 19:06:11 multinode-040952 kubelet[1290]: I0914 19:06:11.624814    1290 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b65f9b32fcb4cf47bc4f4ec371810e2c59f9379e67003f5d435073d09f33200"
	Sep 14 19:06:34 multinode-040952 kubelet[1290]: I0914 19:06:34.964746    1290 scope.go:117] "RemoveContainer" containerID="bda018c9a602e0ece971914d9996bb4c59847a4417bdfa7d7cfee531dbe1b929"
	Sep 14 19:06:34 multinode-040952 kubelet[1290]: I0914 19:06:34.965104    1290 scope.go:117] "RemoveContainer" containerID="c9e2f6411addd9aa2f754f78fda3ce71ac8bf7bb5ff3f65f3c0511f08e429929"
	Sep 14 19:06:34 multinode-040952 kubelet[1290]: E0914 19:06:34.965323    1290 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8f25fe5b-237f-415a-baca-e4342106bb4d)\"" pod="kube-system/storage-provisioner" podUID="8f25fe5b-237f-415a-baca-e4342106bb4d"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-040952 -n multinode-040952
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-040952 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (112.17s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (3.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040952 node delete m03
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040952 status --alsologtostderr
multinode_test.go:400: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-040952 status --alsologtostderr: exit status 2 (394.396063ms)

                                                
                                                
-- stdout --
	multinode-040952
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-040952-m02
	type: Worker
	host: Running
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 19:06:45.296231   29841 out.go:296] Setting OutFile to fd 1 ...
	I0914 19:06:45.296472   29841 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 19:06:45.296481   29841 out.go:309] Setting ErrFile to fd 2...
	I0914 19:06:45.296486   29841 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 19:06:45.296663   29841 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17217-7285/.minikube/bin
	I0914 19:06:45.296810   29841 out.go:303] Setting JSON to false
	I0914 19:06:45.296835   29841 mustload.go:65] Loading cluster: multinode-040952
	I0914 19:06:45.296895   29841 notify.go:220] Checking for updates...
	I0914 19:06:45.297365   29841 config.go:182] Loaded profile config "multinode-040952": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 19:06:45.297385   29841 status.go:255] checking status of multinode-040952 ...
	I0914 19:06:45.297854   29841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0914 19:06:45.297898   29841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 19:06:45.312339   29841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33727
	I0914 19:06:45.312887   29841 main.go:141] libmachine: () Calling .GetVersion
	I0914 19:06:45.313493   29841 main.go:141] libmachine: Using API Version  1
	I0914 19:06:45.313514   29841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 19:06:45.313847   29841 main.go:141] libmachine: () Calling .GetMachineName
	I0914 19:06:45.314041   29841 main.go:141] libmachine: (multinode-040952) Calling .GetState
	I0914 19:06:45.315779   29841 status.go:330] multinode-040952 host status = "Running" (err=<nil>)
	I0914 19:06:45.315794   29841 host.go:66] Checking if "multinode-040952" exists ...
	I0914 19:06:45.316069   29841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0914 19:06:45.316092   29841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 19:06:45.329992   29841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43351
	I0914 19:06:45.330313   29841 main.go:141] libmachine: () Calling .GetVersion
	I0914 19:06:45.330685   29841 main.go:141] libmachine: Using API Version  1
	I0914 19:06:45.330708   29841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 19:06:45.330999   29841 main.go:141] libmachine: () Calling .GetMachineName
	I0914 19:06:45.331175   29841 main.go:141] libmachine: (multinode-040952) Calling .GetIP
	I0914 19:06:45.333971   29841 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:06:45.334413   29841 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:05:33 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
	I0914 19:06:45.334446   29841 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:06:45.334616   29841 host.go:66] Checking if "multinode-040952" exists ...
	I0914 19:06:45.334889   29841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0914 19:06:45.334932   29841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 19:06:45.348609   29841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42357
	I0914 19:06:45.348941   29841 main.go:141] libmachine: () Calling .GetVersion
	I0914 19:06:45.349323   29841 main.go:141] libmachine: Using API Version  1
	I0914 19:06:45.349340   29841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 19:06:45.349644   29841 main.go:141] libmachine: () Calling .GetMachineName
	I0914 19:06:45.349824   29841 main.go:141] libmachine: (multinode-040952) Calling .DriverName
	I0914 19:06:45.349990   29841 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 19:06:45.350015   29841 main.go:141] libmachine: (multinode-040952) Calling .GetSSHHostname
	I0914 19:06:45.352610   29841 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:06:45.353101   29841 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:05:33 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
	I0914 19:06:45.353136   29841 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:06:45.353298   29841 main.go:141] libmachine: (multinode-040952) Calling .GetSSHPort
	I0914 19:06:45.353506   29841 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
	I0914 19:06:45.353655   29841 main.go:141] libmachine: (multinode-040952) Calling .GetSSHUsername
	I0914 19:06:45.353793   29841 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/multinode-040952/id_rsa Username:docker}
	I0914 19:06:45.441311   29841 ssh_runner.go:195] Run: systemctl --version
	I0914 19:06:45.447829   29841 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 19:06:45.460175   29841 kubeconfig.go:92] found "multinode-040952" server: "https://192.168.39.14:8443"
	I0914 19:06:45.460196   29841 api_server.go:166] Checking apiserver status ...
	I0914 19:06:45.460222   29841 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 19:06:45.471738   29841 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1693/cgroup
	I0914 19:06:45.481480   29841 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/pod8756931ebb3ad632d1fa90a79d546b12/b6362a20e1ba85c08c239bfaf2b8874429986fbe62ea0b130c56aa8d6fcfc94f"
	I0914 19:06:45.481529   29841 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod8756931ebb3ad632d1fa90a79d546b12/b6362a20e1ba85c08c239bfaf2b8874429986fbe62ea0b130c56aa8d6fcfc94f/freezer.state
	I0914 19:06:45.491056   29841 api_server.go:204] freezer state: "THAWED"
	I0914 19:06:45.491074   29841 api_server.go:253] Checking apiserver healthz at https://192.168.39.14:8443/healthz ...
	I0914 19:06:45.497304   29841 api_server.go:279] https://192.168.39.14:8443/healthz returned 200:
	ok
	I0914 19:06:45.497322   29841 status.go:421] multinode-040952 apiserver status = Running (err=<nil>)
	I0914 19:06:45.497331   29841 status.go:257] multinode-040952 status: &{Name:multinode-040952 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 19:06:45.497348   29841 status.go:255] checking status of multinode-040952-m02 ...
	I0914 19:06:45.497646   29841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0914 19:06:45.497683   29841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 19:06:45.511831   29841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34001
	I0914 19:06:45.512227   29841 main.go:141] libmachine: () Calling .GetVersion
	I0914 19:06:45.512697   29841 main.go:141] libmachine: Using API Version  1
	I0914 19:06:45.512721   29841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 19:06:45.513008   29841 main.go:141] libmachine: () Calling .GetMachineName
	I0914 19:06:45.513133   29841 main.go:141] libmachine: (multinode-040952-m02) Calling .GetState
	I0914 19:06:45.514710   29841 status.go:330] multinode-040952-m02 host status = "Running" (err=<nil>)
	I0914 19:06:45.514725   29841 host.go:66] Checking if "multinode-040952-m02" exists ...
	I0914 19:06:45.515002   29841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0914 19:06:45.515023   29841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 19:06:45.528854   29841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45599
	I0914 19:06:45.529188   29841 main.go:141] libmachine: () Calling .GetVersion
	I0914 19:06:45.529625   29841 main.go:141] libmachine: Using API Version  1
	I0914 19:06:45.529645   29841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 19:06:45.529923   29841 main.go:141] libmachine: () Calling .GetMachineName
	I0914 19:06:45.530113   29841 main.go:141] libmachine: (multinode-040952-m02) Calling .GetIP
	I0914 19:06:45.532737   29841 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:45.533091   29841 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
	I0914 19:06:45.533122   29841 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:45.533228   29841 host.go:66] Checking if "multinode-040952-m02" exists ...
	I0914 19:06:45.533618   29841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0914 19:06:45.533668   29841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 19:06:45.547102   29841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45033
	I0914 19:06:45.547442   29841 main.go:141] libmachine: () Calling .GetVersion
	I0914 19:06:45.547867   29841 main.go:141] libmachine: Using API Version  1
	I0914 19:06:45.547886   29841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 19:06:45.548170   29841 main.go:141] libmachine: () Calling .GetMachineName
	I0914 19:06:45.548328   29841 main.go:141] libmachine: (multinode-040952-m02) Calling .DriverName
	I0914 19:06:45.548496   29841 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 19:06:45.548515   29841 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHHostname
	I0914 19:06:45.550983   29841 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:45.551381   29841 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
	I0914 19:06:45.551436   29841 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:45.551566   29841 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHPort
	I0914 19:06:45.551731   29841 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
	I0914 19:06:45.551875   29841 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHUsername
	I0914 19:06:45.551985   29841 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/multinode-040952-m02/id_rsa Username:docker}
	I0914 19:06:45.641275   29841 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 19:06:45.653704   29841 status.go:257] multinode-040952-m02 status: &{Name:multinode-040952-m02 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:402: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-040952 status --alsologtostderr" : exit status 2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-040952 -n multinode-040952
helpers_test.go:244: <<< TestMultiNode/serial/DeleteNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/DeleteNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040952 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-040952 logs -n 25: (1.17857542s)
helpers_test.go:252: TestMultiNode/serial/DeleteNode logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| cp      | multinode-040952 cp multinode-040952-m02:/home/docker/cp-test.txt                       | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | 14 Sep 23 19:04 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3444693695/001/cp-test_multinode-040952-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-040952 ssh -n                                                                 | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | 14 Sep 23 19:04 UTC |
	|         | multinode-040952-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-040952 cp multinode-040952-m02:/home/docker/cp-test.txt                       | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | 14 Sep 23 19:04 UTC |
	|         | multinode-040952:/home/docker/cp-test_multinode-040952-m02_multinode-040952.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-040952 ssh -n                                                                 | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | 14 Sep 23 19:04 UTC |
	|         | multinode-040952-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-040952 ssh -n multinode-040952 sudo cat                                       | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | 14 Sep 23 19:04 UTC |
	|         | /home/docker/cp-test_multinode-040952-m02_multinode-040952.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-040952 cp multinode-040952-m02:/home/docker/cp-test.txt                       | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | 14 Sep 23 19:04 UTC |
	|         | multinode-040952-m03:/home/docker/cp-test_multinode-040952-m02_multinode-040952-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-040952 ssh -n                                                                 | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | 14 Sep 23 19:04 UTC |
	|         | multinode-040952-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-040952 ssh -n multinode-040952-m03 sudo cat                                   | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | 14 Sep 23 19:04 UTC |
	|         | /home/docker/cp-test_multinode-040952-m02_multinode-040952-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-040952 cp testdata/cp-test.txt                                                | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | 14 Sep 23 19:04 UTC |
	|         | multinode-040952-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-040952 ssh -n                                                                 | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | 14 Sep 23 19:04 UTC |
	|         | multinode-040952-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-040952 cp multinode-040952-m03:/home/docker/cp-test.txt                       | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | 14 Sep 23 19:04 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3444693695/001/cp-test_multinode-040952-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-040952 ssh -n                                                                 | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | 14 Sep 23 19:04 UTC |
	|         | multinode-040952-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-040952 cp multinode-040952-m03:/home/docker/cp-test.txt                       | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | 14 Sep 23 19:04 UTC |
	|         | multinode-040952:/home/docker/cp-test_multinode-040952-m03_multinode-040952.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-040952 ssh -n                                                                 | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | 14 Sep 23 19:04 UTC |
	|         | multinode-040952-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-040952 ssh -n multinode-040952 sudo cat                                       | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | 14 Sep 23 19:04 UTC |
	|         | /home/docker/cp-test_multinode-040952-m03_multinode-040952.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-040952 cp multinode-040952-m03:/home/docker/cp-test.txt                       | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | 14 Sep 23 19:04 UTC |
	|         | multinode-040952-m02:/home/docker/cp-test_multinode-040952-m03_multinode-040952-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-040952 ssh -n                                                                 | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | 14 Sep 23 19:04 UTC |
	|         | multinode-040952-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-040952 ssh -n multinode-040952-m02 sudo cat                                   | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | 14 Sep 23 19:04 UTC |
	|         | /home/docker/cp-test_multinode-040952-m03_multinode-040952-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-040952 node stop m03                                                          | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | 14 Sep 23 19:04 UTC |
	| node    | multinode-040952 node start                                                             | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | 14 Sep 23 19:04 UTC |
	|         | m03 --alsologtostderr                                                                   |                  |         |         |                     |                     |
	| node    | list -p multinode-040952                                                                | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC |                     |
	| stop    | -p multinode-040952                                                                     | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:04 UTC | 14 Sep 23 19:05 UTC |
	| start   | -p multinode-040952                                                                     | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:05 UTC |                     |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-040952                                                                | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:06 UTC |                     |
	| node    | multinode-040952 node delete                                                            | multinode-040952 | jenkins | v1.31.2 | 14 Sep 23 19:06 UTC | 14 Sep 23 19:06 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/14 19:05:20
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 19:05:20.962804   29302 out.go:296] Setting OutFile to fd 1 ...
	I0914 19:05:20.963060   29302 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 19:05:20.963070   29302 out.go:309] Setting ErrFile to fd 2...
	I0914 19:05:20.963075   29302 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 19:05:20.963243   29302 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17217-7285/.minikube/bin
	I0914 19:05:20.963781   29302 out.go:303] Setting JSON to false
	I0914 19:05:20.964724   29302 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":2870,"bootTime":1694715451,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 19:05:20.964780   29302 start.go:138] virtualization: kvm guest
	I0914 19:05:20.967109   29302 out.go:177] * [multinode-040952] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0914 19:05:20.968562   29302 out.go:177]   - MINIKUBE_LOCATION=17217
	I0914 19:05:20.969984   29302 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 19:05:20.968648   29302 notify.go:220] Checking for updates...
	I0914 19:05:20.972859   29302 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17217-7285/kubeconfig
	I0914 19:05:20.974265   29302 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17217-7285/.minikube
	I0914 19:05:20.975509   29302 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 19:05:20.976805   29302 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 19:05:20.978678   29302 config.go:182] Loaded profile config "multinode-040952": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 19:05:20.978756   29302 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 19:05:20.979122   29302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0914 19:05:20.979158   29302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 19:05:20.994127   29302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36753
	I0914 19:05:20.994544   29302 main.go:141] libmachine: () Calling .GetVersion
	I0914 19:05:20.994996   29302 main.go:141] libmachine: Using API Version  1
	I0914 19:05:20.995035   29302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 19:05:20.995534   29302 main.go:141] libmachine: () Calling .GetMachineName
	I0914 19:05:20.995713   29302 main.go:141] libmachine: (multinode-040952) Calling .DriverName
	I0914 19:05:21.030837   29302 out.go:177] * Using the kvm2 driver based on existing profile
	I0914 19:05:21.032222   29302 start.go:298] selected driver: kvm2
	I0914 19:05:21.032235   29302 start.go:902] validating driver "kvm2" against &{Name:multinode-040952 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-040952 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.14 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.16 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.107 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 19:05:21.032388   29302 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 19:05:21.032684   29302 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 19:05:21.032744   29302 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17217-7285/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 19:05:21.046926   29302 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0914 19:05:21.047549   29302 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 19:05:21.047615   29302 cni.go:84] Creating CNI manager for ""
	I0914 19:05:21.047628   29302 cni.go:136] 3 nodes found, recommending kindnet
	I0914 19:05:21.047635   29302 start_flags.go:321] config:
	{Name:multinode-040952 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-040952 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.14 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.16 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.107 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 19:05:21.047846   29302 iso.go:125] acquiring lock: {Name:mk542b08865b5897b02c4d217212972b66d5575d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 19:05:21.049820   29302 out.go:177] * Starting control plane node multinode-040952 in cluster multinode-040952
	I0914 19:05:21.051078   29302 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0914 19:05:21.051117   29302 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17217-7285/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-amd64.tar.lz4
	I0914 19:05:21.051132   29302 cache.go:57] Caching tarball of preloaded images
	I0914 19:05:21.051200   29302 preload.go:174] Found /home/jenkins/minikube-integration/17217-7285/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0914 19:05:21.051211   29302 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0914 19:05:21.051357   29302 profile.go:148] Saving config to /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/config.json ...
	I0914 19:05:21.051546   29302 start.go:365] acquiring machines lock for multinode-040952: {Name:mk07a05e24a79016fc0a298412b40eb87df032d8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 19:05:21.051585   29302 start.go:369] acquired machines lock for "multinode-040952" in 22.658µs
	I0914 19:05:21.051598   29302 start.go:96] Skipping create...Using existing machine configuration
	I0914 19:05:21.051604   29302 fix.go:54] fixHost starting: 
	I0914 19:05:21.051851   29302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0914 19:05:21.051877   29302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 19:05:21.065211   29302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41551
	I0914 19:05:21.065673   29302 main.go:141] libmachine: () Calling .GetVersion
	I0914 19:05:21.066137   29302 main.go:141] libmachine: Using API Version  1
	I0914 19:05:21.066161   29302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 19:05:21.066462   29302 main.go:141] libmachine: () Calling .GetMachineName
	I0914 19:05:21.066623   29302 main.go:141] libmachine: (multinode-040952) Calling .DriverName
	I0914 19:05:21.066770   29302 main.go:141] libmachine: (multinode-040952) Calling .GetState
	I0914 19:05:21.068116   29302 fix.go:102] recreateIfNeeded on multinode-040952: state=Stopped err=<nil>
	I0914 19:05:21.068149   29302 main.go:141] libmachine: (multinode-040952) Calling .DriverName
	W0914 19:05:21.068327   29302 fix.go:128] unexpected machine state, will restart: <nil>
	I0914 19:05:21.070143   29302 out.go:177] * Restarting existing kvm2 VM for "multinode-040952" ...
	I0914 19:05:21.071437   29302 main.go:141] libmachine: (multinode-040952) Calling .Start
	I0914 19:05:21.071593   29302 main.go:141] libmachine: (multinode-040952) Ensuring networks are active...
	I0914 19:05:21.072249   29302 main.go:141] libmachine: (multinode-040952) Ensuring network default is active
	I0914 19:05:21.072599   29302 main.go:141] libmachine: (multinode-040952) Ensuring network mk-multinode-040952 is active
	I0914 19:05:21.072924   29302 main.go:141] libmachine: (multinode-040952) Getting domain xml...
	I0914 19:05:21.073627   29302 main.go:141] libmachine: (multinode-040952) Creating domain...
	I0914 19:05:22.290792   29302 main.go:141] libmachine: (multinode-040952) Waiting to get IP...
	I0914 19:05:22.291697   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:22.292055   29302 main.go:141] libmachine: (multinode-040952) DBG | unable to find current IP address of domain multinode-040952 in network mk-multinode-040952
	I0914 19:05:22.292102   29302 main.go:141] libmachine: (multinode-040952) DBG | I0914 19:05:22.292035   29331 retry.go:31] will retry after 308.296154ms: waiting for machine to come up
	I0914 19:05:22.601636   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:22.602066   29302 main.go:141] libmachine: (multinode-040952) DBG | unable to find current IP address of domain multinode-040952 in network mk-multinode-040952
	I0914 19:05:22.602099   29302 main.go:141] libmachine: (multinode-040952) DBG | I0914 19:05:22.602024   29331 retry.go:31] will retry after 317.837388ms: waiting for machine to come up
	I0914 19:05:22.921508   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:22.921867   29302 main.go:141] libmachine: (multinode-040952) DBG | unable to find current IP address of domain multinode-040952 in network mk-multinode-040952
	I0914 19:05:22.921901   29302 main.go:141] libmachine: (multinode-040952) DBG | I0914 19:05:22.921847   29331 retry.go:31] will retry after 471.086167ms: waiting for machine to come up
	I0914 19:05:23.394404   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:23.394838   29302 main.go:141] libmachine: (multinode-040952) DBG | unable to find current IP address of domain multinode-040952 in network mk-multinode-040952
	I0914 19:05:23.394871   29302 main.go:141] libmachine: (multinode-040952) DBG | I0914 19:05:23.394792   29331 retry.go:31] will retry after 484.306086ms: waiting for machine to come up
	I0914 19:05:23.880204   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:23.880564   29302 main.go:141] libmachine: (multinode-040952) DBG | unable to find current IP address of domain multinode-040952 in network mk-multinode-040952
	I0914 19:05:23.880583   29302 main.go:141] libmachine: (multinode-040952) DBG | I0914 19:05:23.880535   29331 retry.go:31] will retry after 618.601122ms: waiting for machine to come up
	I0914 19:05:24.500881   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:24.501312   29302 main.go:141] libmachine: (multinode-040952) DBG | unable to find current IP address of domain multinode-040952 in network mk-multinode-040952
	I0914 19:05:24.501338   29302 main.go:141] libmachine: (multinode-040952) DBG | I0914 19:05:24.501260   29331 retry.go:31] will retry after 909.340951ms: waiting for machine to come up
	I0914 19:05:25.412225   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:25.412602   29302 main.go:141] libmachine: (multinode-040952) DBG | unable to find current IP address of domain multinode-040952 in network mk-multinode-040952
	I0914 19:05:25.412643   29302 main.go:141] libmachine: (multinode-040952) DBG | I0914 19:05:25.412551   29331 retry.go:31] will retry after 1.126879825s: waiting for machine to come up
	I0914 19:05:26.540657   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:26.541060   29302 main.go:141] libmachine: (multinode-040952) DBG | unable to find current IP address of domain multinode-040952 in network mk-multinode-040952
	I0914 19:05:26.541092   29302 main.go:141] libmachine: (multinode-040952) DBG | I0914 19:05:26.541009   29331 retry.go:31] will retry after 1.102019824s: waiting for machine to come up
	I0914 19:05:27.644123   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:27.644509   29302 main.go:141] libmachine: (multinode-040952) DBG | unable to find current IP address of domain multinode-040952 in network mk-multinode-040952
	I0914 19:05:27.644533   29302 main.go:141] libmachine: (multinode-040952) DBG | I0914 19:05:27.644464   29331 retry.go:31] will retry after 1.486754446s: waiting for machine to come up
	I0914 19:05:29.133039   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:29.133510   29302 main.go:141] libmachine: (multinode-040952) DBG | unable to find current IP address of domain multinode-040952 in network mk-multinode-040952
	I0914 19:05:29.133535   29302 main.go:141] libmachine: (multinode-040952) DBG | I0914 19:05:29.133470   29331 retry.go:31] will retry after 2.117464983s: waiting for machine to come up
	I0914 19:05:31.252796   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:31.253157   29302 main.go:141] libmachine: (multinode-040952) DBG | unable to find current IP address of domain multinode-040952 in network mk-multinode-040952
	I0914 19:05:31.253189   29302 main.go:141] libmachine: (multinode-040952) DBG | I0914 19:05:31.253114   29331 retry.go:31] will retry after 2.386416431s: waiting for machine to come up
	I0914 19:05:33.642490   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:33.643052   29302 main.go:141] libmachine: (multinode-040952) DBG | unable to find current IP address of domain multinode-040952 in network mk-multinode-040952
	I0914 19:05:33.643079   29302 main.go:141] libmachine: (multinode-040952) DBG | I0914 19:05:33.643013   29331 retry.go:31] will retry after 2.611013914s: waiting for machine to come up
	I0914 19:05:36.255832   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:36.256237   29302 main.go:141] libmachine: (multinode-040952) DBG | unable to find current IP address of domain multinode-040952 in network mk-multinode-040952
	I0914 19:05:36.256259   29302 main.go:141] libmachine: (multinode-040952) DBG | I0914 19:05:36.256195   29331 retry.go:31] will retry after 4.317080822s: waiting for machine to come up
	I0914 19:05:40.578744   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:40.579178   29302 main.go:141] libmachine: (multinode-040952) Found IP for machine: 192.168.39.14
	I0914 19:05:40.579199   29302 main.go:141] libmachine: (multinode-040952) Reserving static IP address...
	I0914 19:05:40.579208   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has current primary IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:40.579755   29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "multinode-040952", mac: "52:54:00:0b:8d:f2", ip: "192.168.39.14"} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
	I0914 19:05:40.579790   29302 main.go:141] libmachine: (multinode-040952) DBG | skip adding static IP to network mk-multinode-040952 - found existing host DHCP lease matching {name: "multinode-040952", mac: "52:54:00:0b:8d:f2", ip: "192.168.39.14"}
	I0914 19:05:40.579808   29302 main.go:141] libmachine: (multinode-040952) Reserved static IP address: 192.168.39.14
	I0914 19:05:40.579828   29302 main.go:141] libmachine: (multinode-040952) Waiting for SSH to be available...
	I0914 19:05:40.579844   29302 main.go:141] libmachine: (multinode-040952) DBG | Getting to WaitForSSH function...
	I0914 19:05:40.581922   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:40.582219   29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
	I0914 19:05:40.582248   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:40.582419   29302 main.go:141] libmachine: (multinode-040952) DBG | Using SSH client type: external
	I0914 19:05:40.582441   29302 main.go:141] libmachine: (multinode-040952) DBG | Using SSH private key: /home/jenkins/minikube-integration/17217-7285/.minikube/machines/multinode-040952/id_rsa (-rw-------)
	I0914 19:05:40.582466   29302 main.go:141] libmachine: (multinode-040952) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.14 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17217-7285/.minikube/machines/multinode-040952/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 19:05:40.582480   29302 main.go:141] libmachine: (multinode-040952) DBG | About to run SSH command:
	I0914 19:05:40.582491   29302 main.go:141] libmachine: (multinode-040952) DBG | exit 0
	I0914 19:05:40.677125   29302 main.go:141] libmachine: (multinode-040952) DBG | SSH cmd err, output: <nil>: 
	I0914 19:05:40.677493   29302 main.go:141] libmachine: (multinode-040952) Calling .GetConfigRaw
	I0914 19:05:40.678081   29302 main.go:141] libmachine: (multinode-040952) Calling .GetIP
	I0914 19:05:40.680506   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:40.680910   29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
	I0914 19:05:40.680945   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:40.681103   29302 profile.go:148] Saving config to /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/config.json ...
	I0914 19:05:40.681284   29302 machine.go:88] provisioning docker machine ...
	I0914 19:05:40.681323   29302 main.go:141] libmachine: (multinode-040952) Calling .DriverName
	I0914 19:05:40.681566   29302 main.go:141] libmachine: (multinode-040952) Calling .GetMachineName
	I0914 19:05:40.681734   29302 buildroot.go:166] provisioning hostname "multinode-040952"
	I0914 19:05:40.681755   29302 main.go:141] libmachine: (multinode-040952) Calling .GetMachineName
	I0914 19:05:40.681906   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHHostname
	I0914 19:05:40.683964   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:40.684284   29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
	I0914 19:05:40.684307   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:40.684417   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHPort
	I0914 19:05:40.684595   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
	I0914 19:05:40.684736   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
	I0914 19:05:40.684890   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHUsername
	I0914 19:05:40.685062   29302 main.go:141] libmachine: Using SSH client type: native
	I0914 19:05:40.685397   29302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I0914 19:05:40.685412   29302 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-040952 && echo "multinode-040952" | sudo tee /etc/hostname
	I0914 19:05:40.823251   29302 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-040952
	
	I0914 19:05:40.823283   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHHostname
	I0914 19:05:40.825791   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:40.826169   29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
	I0914 19:05:40.826206   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:40.826321   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHPort
	I0914 19:05:40.826510   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
	I0914 19:05:40.826658   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
	I0914 19:05:40.826793   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHUsername
	I0914 19:05:40.826952   29302 main.go:141] libmachine: Using SSH client type: native
	I0914 19:05:40.827274   29302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I0914 19:05:40.827292   29302 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-040952' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-040952/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-040952' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 19:05:40.958211   29302 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 19:05:40.958234   29302 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17217-7285/.minikube CaCertPath:/home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17217-7285/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17217-7285/.minikube}
	I0914 19:05:40.958251   29302 buildroot.go:174] setting up certificates
	I0914 19:05:40.958258   29302 provision.go:83] configureAuth start
	I0914 19:05:40.958270   29302 main.go:141] libmachine: (multinode-040952) Calling .GetMachineName
	I0914 19:05:40.958579   29302 main.go:141] libmachine: (multinode-040952) Calling .GetIP
	I0914 19:05:40.960950   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:40.961279   29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
	I0914 19:05:40.961310   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:40.961443   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHHostname
	I0914 19:05:40.963552   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:40.964139   29302 provision.go:138] copyHostCerts
	I0914 19:05:40.966068   29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
	I0914 19:05:40.966080   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17217-7285/.minikube/ca.pem
	I0914 19:05:40.966098   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:40.966106   29302 exec_runner.go:144] found /home/jenkins/minikube-integration/17217-7285/.minikube/ca.pem, removing ...
	I0914 19:05:40.966111   29302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17217-7285/.minikube/ca.pem
	I0914 19:05:40.966169   29302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17217-7285/.minikube/ca.pem (1082 bytes)
	I0914 19:05:40.966263   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17217-7285/.minikube/cert.pem
	I0914 19:05:40.966284   29302 exec_runner.go:144] found /home/jenkins/minikube-integration/17217-7285/.minikube/cert.pem, removing ...
	I0914 19:05:40.966291   29302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17217-7285/.minikube/cert.pem
	I0914 19:05:40.966314   29302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17217-7285/.minikube/cert.pem (1123 bytes)
	I0914 19:05:40.966407   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17217-7285/.minikube/key.pem
	I0914 19:05:40.966426   29302 exec_runner.go:144] found /home/jenkins/minikube-integration/17217-7285/.minikube/key.pem, removing ...
	I0914 19:05:40.966429   29302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17217-7285/.minikube/key.pem
	I0914 19:05:40.966455   29302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17217-7285/.minikube/key.pem (1679 bytes)
	I0914 19:05:40.966496   29302 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17217-7285/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca-key.pem org=jenkins.multinode-040952 san=[192.168.39.14 192.168.39.14 localhost 127.0.0.1 minikube multinode-040952]
	I0914 19:05:41.093709   29302 provision.go:172] copyRemoteCerts
	I0914 19:05:41.093761   29302 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 19:05:41.093784   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHHostname
	I0914 19:05:41.096513   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:41.096889   29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
	I0914 19:05:41.096919   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:41.097089   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHPort
	I0914 19:05:41.097303   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
	I0914 19:05:41.097427   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHUsername
	I0914 19:05:41.097563   29302 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/multinode-040952/id_rsa Username:docker}
	I0914 19:05:41.185959   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0914 19:05:41.186035   29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 19:05:41.209076   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0914 19:05:41.209136   29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0914 19:05:41.231360   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0914 19:05:41.231432   29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 19:05:41.253346   29302 provision.go:86] duration metric: configureAuth took 295.075916ms
	I0914 19:05:41.253364   29302 buildroot.go:189] setting minikube options for container-runtime
	I0914 19:05:41.253583   29302 config.go:182] Loaded profile config "multinode-040952": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 19:05:41.253604   29302 main.go:141] libmachine: (multinode-040952) Calling .DriverName
	I0914 19:05:41.253889   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHHostname
	I0914 19:05:41.256397   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:41.256706   29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
	I0914 19:05:41.256746   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:41.256796   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHPort
	I0914 19:05:41.256990   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
	I0914 19:05:41.257147   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
	I0914 19:05:41.257300   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHUsername
	I0914 19:05:41.257433   29302 main.go:141] libmachine: Using SSH client type: native
	I0914 19:05:41.257764   29302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I0914 19:05:41.257781   29302 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0914 19:05:41.378606   29302 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0914 19:05:41.378636   29302 buildroot.go:70] root file system type: tmpfs
	I0914 19:05:41.378779   29302 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0914 19:05:41.378811   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHHostname
	I0914 19:05:41.381344   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:41.381631   29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
	I0914 19:05:41.381653   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:41.381854   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHPort
	I0914 19:05:41.382017   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
	I0914 19:05:41.382151   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
	I0914 19:05:41.382256   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHUsername
	I0914 19:05:41.382401   29302 main.go:141] libmachine: Using SSH client type: native
	I0914 19:05:41.382846   29302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I0914 19:05:41.382955   29302 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0914 19:05:41.524710   29302 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0914 19:05:41.524751   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHHostname
	I0914 19:05:41.527598   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:41.528021   29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
	I0914 19:05:41.528050   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:41.528233   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHPort
	I0914 19:05:41.528403   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
	I0914 19:05:41.528520   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
	I0914 19:05:41.528618   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHUsername
	I0914 19:05:41.528833   29302 main.go:141] libmachine: Using SSH client type: native
	I0914 19:05:41.529147   29302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I0914 19:05:41.529175   29302 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0914 19:05:42.395560   29302 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0914 19:05:42.395591   29302 machine.go:91] provisioned docker machine in 1.714293106s
	I0914 19:05:42.395605   29302 start.go:300] post-start starting for "multinode-040952" (driver="kvm2")
	I0914 19:05:42.395617   29302 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 19:05:42.395637   29302 main.go:141] libmachine: (multinode-040952) Calling .DriverName
	I0914 19:05:42.395990   29302 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 19:05:42.396021   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHHostname
	I0914 19:05:42.398544   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:42.398997   29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
	I0914 19:05:42.399029   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:42.399146   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHPort
	I0914 19:05:42.399327   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
	I0914 19:05:42.399452   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHUsername
	I0914 19:05:42.399604   29302 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/multinode-040952/id_rsa Username:docker}
	I0914 19:05:42.490598   29302 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 19:05:42.494659   29302 command_runner.go:130] > NAME=Buildroot
	I0914 19:05:42.494675   29302 command_runner.go:130] > VERSION=2021.02.12-1-gaa3debf-dirty
	I0914 19:05:42.494679   29302 command_runner.go:130] > ID=buildroot
	I0914 19:05:42.494684   29302 command_runner.go:130] > VERSION_ID=2021.02.12
	I0914 19:05:42.494689   29302 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0914 19:05:42.494714   29302 info.go:137] Remote host: Buildroot 2021.02.12
	I0914 19:05:42.494726   29302 filesync.go:126] Scanning /home/jenkins/minikube-integration/17217-7285/.minikube/addons for local assets ...
	I0914 19:05:42.494786   29302 filesync.go:126] Scanning /home/jenkins/minikube-integration/17217-7285/.minikube/files for local assets ...
	I0914 19:05:42.494859   29302 filesync.go:149] local asset: /home/jenkins/minikube-integration/17217-7285/.minikube/files/etc/ssl/certs/145062.pem -> 145062.pem in /etc/ssl/certs
	I0914 19:05:42.494867   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/files/etc/ssl/certs/145062.pem -> /etc/ssl/certs/145062.pem
	I0914 19:05:42.494949   29302 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 19:05:42.504158   29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/files/etc/ssl/certs/145062.pem --> /etc/ssl/certs/145062.pem (1708 bytes)
	I0914 19:05:42.526832   29302 start.go:303] post-start completed in 131.213234ms
	I0914 19:05:42.526851   29302 fix.go:56] fixHost completed within 21.475246623s
	I0914 19:05:42.526869   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHHostname
	I0914 19:05:42.529527   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:42.529937   29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
	I0914 19:05:42.529986   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:42.530137   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHPort
	I0914 19:05:42.530338   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
	I0914 19:05:42.530471   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
	I0914 19:05:42.530592   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHUsername
	I0914 19:05:42.530728   29302 main.go:141] libmachine: Using SSH client type: native
	I0914 19:05:42.531030   29302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.14 22 <nil> <nil>}
	I0914 19:05:42.531041   29302 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0914 19:05:42.654398   29302 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694718342.602499385
	
	I0914 19:05:42.654428   29302 fix.go:206] guest clock: 1694718342.602499385
	I0914 19:05:42.654435   29302 fix.go:219] Guest: 2023-09-14 19:05:42.602499385 +0000 UTC Remote: 2023-09-14 19:05:42.526854621 +0000 UTC m=+21.595630701 (delta=75.644764ms)
	I0914 19:05:42.654452   29302 fix.go:190] guest clock delta is within tolerance: 75.644764ms
	I0914 19:05:42.654457   29302 start.go:83] releasing machines lock for "multinode-040952", held for 21.60286411s
	I0914 19:05:42.654478   29302 main.go:141] libmachine: (multinode-040952) Calling .DriverName
	I0914 19:05:42.654724   29302 main.go:141] libmachine: (multinode-040952) Calling .GetIP
	I0914 19:05:42.657287   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:42.657640   29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
	I0914 19:05:42.657674   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:42.657831   29302 main.go:141] libmachine: (multinode-040952) Calling .DriverName
	I0914 19:05:42.658283   29302 main.go:141] libmachine: (multinode-040952) Calling .DriverName
	I0914 19:05:42.658453   29302 main.go:141] libmachine: (multinode-040952) Calling .DriverName
	I0914 19:05:42.658514   29302 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 19:05:42.658551   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHHostname
	I0914 19:05:42.658645   29302 ssh_runner.go:195] Run: cat /version.json
	I0914 19:05:42.658666   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHHostname
	I0914 19:05:42.660832   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:42.661105   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:42.661257   29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
	I0914 19:05:42.661287   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:42.661432   29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
	I0914 19:05:42.661445   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHPort
	I0914 19:05:42.661474   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:42.661579   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHPort
	I0914 19:05:42.661683   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
	I0914 19:05:42.661749   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
	I0914 19:05:42.661825   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHUsername
	I0914 19:05:42.661884   29302 main.go:141] libmachine: (multinode-040952) Calling .GetSSHUsername
	I0914 19:05:42.661944   29302 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/multinode-040952/id_rsa Username:docker}
	I0914 19:05:42.661988   29302 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/multinode-040952/id_rsa Username:docker}
	I0914 19:05:42.746664   29302 command_runner.go:130] > {"iso_version": "v1.31.0-1694468241-17194", "kicbase_version": "v0.0.40-1694457807-17194", "minikube_version": "v1.31.2", "commit": "08513a9f809e39764bdb93fc427d760a652ba5ea"}
	I0914 19:05:42.747194   29302 ssh_runner.go:195] Run: systemctl --version
	I0914 19:05:42.773722   29302 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0914 19:05:42.773771   29302 command_runner.go:130] > systemd 247 (247)
	I0914 19:05:42.773794   29302 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0914 19:05:42.773870   29302 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0914 19:05:42.779663   29302 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0914 19:05:42.779691   29302 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 19:05:42.779753   29302 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 19:05:42.796458   29302 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0914 19:05:42.796494   29302 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0914 19:05:42.796506   29302 start.go:469] detecting cgroup driver to use...
	I0914 19:05:42.796618   29302 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 19:05:42.814727   29302 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0914 19:05:42.815085   29302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0914 19:05:42.825286   29302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0914 19:05:42.835590   29302 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0914 19:05:42.835639   29302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0914 19:05:42.845397   29302 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 19:05:42.855075   29302 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0914 19:05:42.864775   29302 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 19:05:42.874625   29302 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 19:05:42.885032   29302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0914 19:05:42.895300   29302 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 19:05:42.904333   29302 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0914 19:05:42.904406   29302 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 19:05:42.913443   29302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 19:05:43.014402   29302 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0914 19:05:43.034266   29302 start.go:469] detecting cgroup driver to use...
	I0914 19:05:43.034341   29302 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0914 19:05:43.046339   29302 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0914 19:05:43.047277   29302 command_runner.go:130] > [Unit]
	I0914 19:05:43.047292   29302 command_runner.go:130] > Description=Docker Application Container Engine
	I0914 19:05:43.047300   29302 command_runner.go:130] > Documentation=https://docs.docker.com
	I0914 19:05:43.047311   29302 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0914 19:05:43.047321   29302 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0914 19:05:43.047330   29302 command_runner.go:130] > StartLimitBurst=3
	I0914 19:05:43.047340   29302 command_runner.go:130] > StartLimitIntervalSec=60
	I0914 19:05:43.047347   29302 command_runner.go:130] > [Service]
	I0914 19:05:43.047354   29302 command_runner.go:130] > Type=notify
	I0914 19:05:43.047374   29302 command_runner.go:130] > Restart=on-failure
	I0914 19:05:43.047387   29302 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0914 19:05:43.047408   29302 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0914 19:05:43.047423   29302 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0914 19:05:43.047437   29302 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0914 19:05:43.047453   29302 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0914 19:05:43.047465   29302 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0914 19:05:43.047478   29302 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0914 19:05:43.047499   29302 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0914 19:05:43.047514   29302 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0914 19:05:43.047523   29302 command_runner.go:130] > ExecStart=
	I0914 19:05:43.047549   29302 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I0914 19:05:43.047562   29302 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0914 19:05:43.047574   29302 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0914 19:05:43.047589   29302 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0914 19:05:43.047600   29302 command_runner.go:130] > LimitNOFILE=infinity
	I0914 19:05:43.047609   29302 command_runner.go:130] > LimitNPROC=infinity
	I0914 19:05:43.047619   29302 command_runner.go:130] > LimitCORE=infinity
	I0914 19:05:43.047632   29302 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0914 19:05:43.047647   29302 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0914 19:05:43.047657   29302 command_runner.go:130] > TasksMax=infinity
	I0914 19:05:43.047668   29302 command_runner.go:130] > TimeoutStartSec=0
	I0914 19:05:43.047682   29302 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0914 19:05:43.047692   29302 command_runner.go:130] > Delegate=yes
	I0914 19:05:43.047706   29302 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0914 19:05:43.047716   29302 command_runner.go:130] > KillMode=process
	I0914 19:05:43.047721   29302 command_runner.go:130] > [Install]
	I0914 19:05:43.047732   29302 command_runner.go:130] > WantedBy=multi-user.target
	I0914 19:05:43.047831   29302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 19:05:43.059348   29302 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 19:05:43.076586   29302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 19:05:43.091070   29302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0914 19:05:43.103630   29302 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0914 19:05:43.127566   29302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0914 19:05:43.140558   29302 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 19:05:43.157218   29302 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0914 19:05:43.157773   29302 ssh_runner.go:195] Run: which cri-dockerd
	I0914 19:05:43.161227   29302 command_runner.go:130] > /usr/bin/cri-dockerd
	I0914 19:05:43.161332   29302 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0914 19:05:43.168999   29302 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0914 19:05:43.184057   29302 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0914 19:05:43.293264   29302 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0914 19:05:43.399283   29302 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0914 19:05:43.399314   29302 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0914 19:05:43.416580   29302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 19:05:43.527824   29302 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0914 19:05:43.992016   29302 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0914 19:05:44.097079   29302 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0914 19:05:44.209025   29302 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0914 19:05:44.320513   29302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 19:05:44.428053   29302 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0914 19:05:44.444720   29302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 19:05:44.552820   29302 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0914 19:05:44.632416   29302 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0914 19:05:44.632491   29302 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0914 19:05:44.638252   29302 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0914 19:05:44.638276   29302 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0914 19:05:44.638286   29302 command_runner.go:130] > Device: 16h/22d	Inode: 831         Links: 1
	I0914 19:05:44.638296   29302 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0914 19:05:44.638305   29302 command_runner.go:130] > Access: 2023-09-14 19:05:44.514543091 +0000
	I0914 19:05:44.638313   29302 command_runner.go:130] > Modify: 2023-09-14 19:05:44.514543091 +0000
	I0914 19:05:44.638326   29302 command_runner.go:130] > Change: 2023-09-14 19:05:44.517543091 +0000
	I0914 19:05:44.638332   29302 command_runner.go:130] >  Birth: -
	I0914 19:05:44.638715   29302 start.go:537] Will wait 60s for crictl version
	I0914 19:05:44.638765   29302 ssh_runner.go:195] Run: which crictl
	I0914 19:05:44.642939   29302 command_runner.go:130] > /usr/bin/crictl
	I0914 19:05:44.643309   29302 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 19:05:44.681642   29302 command_runner.go:130] > Version:  0.1.0
	I0914 19:05:44.681667   29302 command_runner.go:130] > RuntimeName:  docker
	I0914 19:05:44.681672   29302 command_runner.go:130] > RuntimeVersion:  24.0.6
	I0914 19:05:44.681678   29302 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0914 19:05:44.683160   29302 start.go:553] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1alpha2
	I0914 19:05:44.683219   29302 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0914 19:05:44.707204   29302 command_runner.go:130] > 24.0.6
	I0914 19:05:44.708405   29302 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0914 19:05:44.736598   29302 command_runner.go:130] > 24.0.6
	I0914 19:05:44.738686   29302 out.go:204] * Preparing Kubernetes v1.28.1 on Docker 24.0.6 ...
	I0914 19:05:44.738719   29302 main.go:141] libmachine: (multinode-040952) Calling .GetIP
	I0914 19:05:44.741297   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:44.741690   29302 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
	I0914 19:05:44.741717   29302 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:05:44.741894   29302 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0914 19:05:44.745777   29302 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 19:05:44.758482   29302 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0914 19:05:44.758533   29302 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0914 19:05:44.777353   29302 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.1
	I0914 19:05:44.777369   29302 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.1
	I0914 19:05:44.777375   29302 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.1
	I0914 19:05:44.777380   29302 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.1
	I0914 19:05:44.777385   29302 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
	I0914 19:05:44.777389   29302 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0914 19:05:44.777395   29302 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0914 19:05:44.777399   29302 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0914 19:05:44.777404   29302 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 19:05:44.777409   29302 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0914 19:05:44.777499   29302 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.1
	registry.k8s.io/kube-proxy:v1.28.1
	registry.k8s.io/kube-controller-manager:v1.28.1
	registry.k8s.io/kube-scheduler:v1.28.1
	kindest/kindnetd:v20230809-80a64d96
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0914 19:05:44.777521   29302 docker.go:566] Images already preloaded, skipping extraction
	I0914 19:05:44.777580   29302 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0914 19:05:44.796442   29302 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.1
	I0914 19:05:44.796466   29302 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.1
	I0914 19:05:44.796474   29302 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.1
	I0914 19:05:44.796487   29302 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.1
	I0914 19:05:44.796495   29302 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
	I0914 19:05:44.796502   29302 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0914 19:05:44.796510   29302 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0914 19:05:44.796517   29302 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0914 19:05:44.796526   29302 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 19:05:44.796533   29302 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0914 19:05:44.796582   29302 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.1
	registry.k8s.io/kube-proxy:v1.28.1
	registry.k8s.io/kube-controller-manager:v1.28.1
	registry.k8s.io/kube-scheduler:v1.28.1
	kindest/kindnetd:v20230809-80a64d96
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0914 19:05:44.796603   29302 cache_images.go:84] Images are preloaded, skipping loading
	I0914 19:05:44.796662   29302 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0914 19:05:44.826844   29302 command_runner.go:130] > cgroupfs
	I0914 19:05:44.827994   29302 cni.go:84] Creating CNI manager for ""
	I0914 19:05:44.828012   29302 cni.go:136] 3 nodes found, recommending kindnet
	I0914 19:05:44.828028   29302 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0914 19:05:44.828050   29302 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.14 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-040952 NodeName:multinode-040952 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.14"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.14 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 19:05:44.828163   29302 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.14
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-040952"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.14
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.14"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 19:05:44.828241   29302 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-040952 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.14
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:multinode-040952 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0914 19:05:44.828290   29302 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0914 19:05:44.837426   29302 command_runner.go:130] > kubeadm
	I0914 19:05:44.837444   29302 command_runner.go:130] > kubectl
	I0914 19:05:44.837448   29302 command_runner.go:130] > kubelet
	I0914 19:05:44.837478   29302 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 19:05:44.837538   29302 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 19:05:44.845710   29302 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0914 19:05:44.861289   29302 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 19:05:44.876364   29302 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I0914 19:05:44.892748   29302 ssh_runner.go:195] Run: grep 192.168.39.14	control-plane.minikube.internal$ /etc/hosts
	I0914 19:05:44.896225   29302 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.14	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 19:05:44.908521   29302 certs.go:56] Setting up /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952 for IP: 192.168.39.14
	I0914 19:05:44.908554   29302 certs.go:190] acquiring lock for shared ca certs: {Name:mk8231a646ae91c44c394a9ea29f867fd3f74220 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 19:05:44.908702   29302 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17217-7285/.minikube/ca.key
	I0914 19:05:44.908750   29302 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17217-7285/.minikube/proxy-client-ca.key
	I0914 19:05:44.908825   29302 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/client.key
	I0914 19:05:44.908896   29302 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/apiserver.key.ba52ec04
	I0914 19:05:44.908936   29302 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/proxy-client.key
	I0914 19:05:44.908959   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0914 19:05:44.908984   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0914 19:05:44.909003   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0914 19:05:44.909021   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0914 19:05:44.909038   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0914 19:05:44.909057   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0914 19:05:44.909069   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0914 19:05:44.909083   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0914 19:05:44.909133   29302 certs.go:437] found cert: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/home/jenkins/minikube-integration/17217-7285/.minikube/certs/14506.pem (1338 bytes)
	W0914 19:05:44.909164   29302 certs.go:433] ignoring /home/jenkins/minikube-integration/17217-7285/.minikube/certs/home/jenkins/minikube-integration/17217-7285/.minikube/certs/14506_empty.pem, impossibly tiny 0 bytes
	I0914 19:05:44.909175   29302 certs.go:437] found cert: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 19:05:44.909194   29302 certs.go:437] found cert: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca.pem (1082 bytes)
	I0914 19:05:44.909221   29302 certs.go:437] found cert: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/home/jenkins/minikube-integration/17217-7285/.minikube/certs/cert.pem (1123 bytes)
	I0914 19:05:44.909246   29302 certs.go:437] found cert: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/home/jenkins/minikube-integration/17217-7285/.minikube/certs/key.pem (1679 bytes)
	I0914 19:05:44.909284   29302 certs.go:437] found cert: /home/jenkins/minikube-integration/17217-7285/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17217-7285/.minikube/files/etc/ssl/certs/145062.pem (1708 bytes)
	I0914 19:05:44.909309   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/14506.pem -> /usr/share/ca-certificates/14506.pem
	I0914 19:05:44.909322   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/files/etc/ssl/certs/145062.pem -> /usr/share/ca-certificates/145062.pem
	I0914 19:05:44.909336   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0914 19:05:44.909846   29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0914 19:05:44.934419   29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0914 19:05:44.957511   29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 19:05:44.980559   29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 19:05:45.004923   29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 19:05:45.028375   29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 19:05:45.051817   29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 19:05:45.074510   29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0914 19:05:45.098260   29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/certs/14506.pem --> /usr/share/ca-certificates/14506.pem (1338 bytes)
	I0914 19:05:45.121292   29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/files/etc/ssl/certs/145062.pem --> /usr/share/ca-certificates/145062.pem (1708 bytes)
	I0914 19:05:45.144038   29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 19:05:45.166026   29302 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 19:05:45.181807   29302 ssh_runner.go:195] Run: openssl version
	I0914 19:05:45.187376   29302 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0914 19:05:45.187428   29302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14506.pem && ln -fs /usr/share/ca-certificates/14506.pem /etc/ssl/certs/14506.pem"
	I0914 19:05:45.196849   29302 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14506.pem
	I0914 19:05:45.201160   29302 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 14 18:48 /usr/share/ca-certificates/14506.pem
	I0914 19:05:45.201218   29302 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 14 18:48 /usr/share/ca-certificates/14506.pem
	I0914 19:05:45.201259   29302 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14506.pem
	I0914 19:05:45.206455   29302 command_runner.go:130] > 51391683
	I0914 19:05:45.206657   29302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14506.pem /etc/ssl/certs/51391683.0"
	I0914 19:05:45.216148   29302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145062.pem && ln -fs /usr/share/ca-certificates/145062.pem /etc/ssl/certs/145062.pem"
	I0914 19:05:45.225498   29302 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145062.pem
	I0914 19:05:45.229584   29302 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 14 18:48 /usr/share/ca-certificates/145062.pem
	I0914 19:05:45.229749   29302 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 14 18:48 /usr/share/ca-certificates/145062.pem
	I0914 19:05:45.229794   29302 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145062.pem
	I0914 19:05:45.235209   29302 command_runner.go:130] > 3ec20f2e
	I0914 19:05:45.235283   29302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145062.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 19:05:45.244557   29302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 19:05:45.253825   29302 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 19:05:45.258352   29302 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 14 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0914 19:05:45.258379   29302 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 14 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0914 19:05:45.258421   29302 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 19:05:45.263679   29302 command_runner.go:130] > b5213941
	I0914 19:05:45.263724   29302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
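Each openssl x509 -hash / ln -fs pair above installs a CA bundle under its OpenSSL subject-name hash so the system trust store can resolve it. A minimal sketch of that hash-and-link step, assuming the openssl binary is on PATH and write access to /etc/ssl/certs; the command and the minikubeCA.pem path are the ones shown in the log:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkBySubjectHash reproduces the hash-and-link step from the log:
    //   openssl x509 -hash -noout -in <pem>   -> e.g. "b5213941"
    //   ln -fs <pem> /etc/ssl/certs/<hash>.0
    func linkBySubjectHash(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // mimic the -f in "ln -fs"
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
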
	I0914 19:05:45.273201   29302 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0914 19:05:45.277387   29302 command_runner.go:130] > ca.crt
	I0914 19:05:45.277404   29302 command_runner.go:130] > ca.key
	I0914 19:05:45.277412   29302 command_runner.go:130] > healthcheck-client.crt
	I0914 19:05:45.277419   29302 command_runner.go:130] > healthcheck-client.key
	I0914 19:05:45.277426   29302 command_runner.go:130] > peer.crt
	I0914 19:05:45.277433   29302 command_runner.go:130] > peer.key
	I0914 19:05:45.277439   29302 command_runner.go:130] > server.crt
	I0914 19:05:45.277446   29302 command_runner.go:130] > server.key
	I0914 19:05:45.277502   29302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 19:05:45.283251   29302 command_runner.go:130] > Certificate will not expire
	I0914 19:05:45.283310   29302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 19:05:45.289331   29302 command_runner.go:130] > Certificate will not expire
	I0914 19:05:45.289405   29302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 19:05:45.295261   29302 command_runner.go:130] > Certificate will not expire
	I0914 19:05:45.295329   29302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 19:05:45.300680   29302 command_runner.go:130] > Certificate will not expire
	I0914 19:05:45.300910   29302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 19:05:45.306424   29302 command_runner.go:130] > Certificate will not expire
	I0914 19:05:45.306599   29302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0914 19:05:45.311906   29302 command_runner.go:130] > Certificate will not expire
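Each openssl x509 -noout -checkend 86400 call above asks whether a certificate expires within the next 86400 seconds (24 hours). The same check can be expressed with the Go standard library; a minimal sketch using one of the certificate paths from the log:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the certificate at path expires within d,
    // the same question "openssl x509 -noout -checkend 86400" answers.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        if soon {
            fmt.Println("Certificate will expire")
        } else {
            fmt.Println("Certificate will not expire")
        }
    }
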
	I0914 19:05:45.312249   29302 kubeadm.go:404] StartCluster: {Name:multinode-040952 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.28.1 ClusterName:multinode-040952 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.14 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.16 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.107 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingre
ss:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 19:05:45.312423   29302 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0914 19:05:45.331162   29302 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 19:05:45.340190   29302 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0914 19:05:45.340212   29302 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0914 19:05:45.340221   29302 command_runner.go:130] > /var/lib/minikube/etcd:
	I0914 19:05:45.340226   29302 command_runner.go:130] > member
	I0914 19:05:45.340246   29302 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0914 19:05:45.340267   29302 kubeadm.go:636] restartCluster start
	I0914 19:05:45.340309   29302 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 19:05:45.348452   29302 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:45.348894   29302 kubeconfig.go:135] verify returned: extract IP: "multinode-040952" does not appear in /home/jenkins/minikube-integration/17217-7285/kubeconfig
	I0914 19:05:45.348998   29302 kubeconfig.go:146] "multinode-040952" context is missing from /home/jenkins/minikube-integration/17217-7285/kubeconfig - will repair!
	I0914 19:05:45.349266   29302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17217-7285/kubeconfig: {Name:mkd810f3a7b7ee0c3e3eff94a19f3da881e8200c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 19:05:45.349662   29302 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17217-7285/kubeconfig
	I0914 19:05:45.349849   29302 kapi.go:59] client config for multinode-040952: &rest.Config{Host:"https://192.168.39.14:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/client.crt", KeyFile:"/home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/client.key", CAFile:"/home/jenkins/minikube-integration/17217-7285/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c15e60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0914 19:05:45.350444   29302 cert_rotation.go:137] Starting client certificate rotation controller
	I0914 19:05:45.350587   29302 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 19:05:45.358418   29302 api_server.go:166] Checking apiserver status ...
	I0914 19:05:45.358456   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 19:05:45.368403   29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:45.368429   29302 api_server.go:166] Checking apiserver status ...
	I0914 19:05:45.368512   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 19:05:45.378454   29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:45.879114   29302 api_server.go:166] Checking apiserver status ...
	I0914 19:05:45.879187   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 19:05:45.890404   29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:46.379073   29302 api_server.go:166] Checking apiserver status ...
	I0914 19:05:46.379137   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 19:05:46.390460   29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:46.878635   29302 api_server.go:166] Checking apiserver status ...
	I0914 19:05:46.878712   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 19:05:46.890234   29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:47.378771   29302 api_server.go:166] Checking apiserver status ...
	I0914 19:05:47.378861   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 19:05:47.390972   29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:47.879569   29302 api_server.go:166] Checking apiserver status ...
	I0914 19:05:47.879636   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 19:05:47.891015   29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:48.378618   29302 api_server.go:166] Checking apiserver status ...
	I0914 19:05:48.378691   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 19:05:48.390037   29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:48.878591   29302 api_server.go:166] Checking apiserver status ...
	I0914 19:05:48.878656   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 19:05:48.889682   29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:49.379283   29302 api_server.go:166] Checking apiserver status ...
	I0914 19:05:49.379348   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 19:05:49.390298   29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:49.878830   29302 api_server.go:166] Checking apiserver status ...
	I0914 19:05:49.878929   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 19:05:49.890070   29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:50.378594   29302 api_server.go:166] Checking apiserver status ...
	I0914 19:05:50.378669   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 19:05:50.389750   29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:50.879406   29302 api_server.go:166] Checking apiserver status ...
	I0914 19:05:50.879474   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 19:05:50.890792   29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:51.378749   29302 api_server.go:166] Checking apiserver status ...
	I0914 19:05:51.378818   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 19:05:51.390362   29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:51.878913   29302 api_server.go:166] Checking apiserver status ...
	I0914 19:05:51.878983   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 19:05:51.890684   29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:52.379313   29302 api_server.go:166] Checking apiserver status ...
	I0914 19:05:52.379396   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 19:05:52.390412   29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:52.878965   29302 api_server.go:166] Checking apiserver status ...
	I0914 19:05:52.879054   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 19:05:52.890079   29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:53.378659   29302 api_server.go:166] Checking apiserver status ...
	I0914 19:05:53.378734   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 19:05:53.389835   29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:53.879480   29302 api_server.go:166] Checking apiserver status ...
	I0914 19:05:53.879549   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 19:05:53.890643   29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:54.379316   29302 api_server.go:166] Checking apiserver status ...
	I0914 19:05:54.379396   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 19:05:54.390543   29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:54.879126   29302 api_server.go:166] Checking apiserver status ...
	I0914 19:05:54.879190   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0914 19:05:54.890939   29302 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0914 19:05:55.358694   29302 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0914 19:05:55.358719   29302 kubeadm.go:1128] stopping kube-system containers ...
	I0914 19:05:55.358774   29302 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0914 19:05:55.380728   29302 command_runner.go:130] > 5ca168b256ec
	I0914 19:05:55.380744   29302 command_runner.go:130] > bda018c9a602
	I0914 19:05:55.380748   29302 command_runner.go:130] > fb2dbcea99e9
	I0914 19:05:55.380752   29302 command_runner.go:130] > 2de9c2baa72f
	I0914 19:05:55.380756   29302 command_runner.go:130] > 1dac2d18ee96
	I0914 19:05:55.380760   29302 command_runner.go:130] > bd14e8416f22
	I0914 19:05:55.380764   29302 command_runner.go:130] > 2c6b193d8f06
	I0914 19:05:55.380768   29302 command_runner.go:130] > ac89590af9af
	I0914 19:05:55.380771   29302 command_runner.go:130] > e7dd2a8d2bf2
	I0914 19:05:55.380776   29302 command_runner.go:130] > 79de1cbad023
	I0914 19:05:55.380780   29302 command_runner.go:130] > bdae306df774
	I0914 19:05:55.380783   29302 command_runner.go:130] > 7ae1932584ff
	I0914 19:05:55.380787   29302 command_runner.go:130] > 3204588282f3
	I0914 19:05:55.380790   29302 command_runner.go:130] > c60a4b7edf2a
	I0914 19:05:55.380794   29302 command_runner.go:130] > bf69af78fefd
	I0914 19:05:55.380798   29302 command_runner.go:130] > 992d221cf3de
	I0914 19:05:55.381007   29302 docker.go:462] Stopping containers: [5ca168b256ec bda018c9a602 fb2dbcea99e9 2de9c2baa72f 1dac2d18ee96 bd14e8416f22 2c6b193d8f06 ac89590af9af e7dd2a8d2bf2 79de1cbad023 bdae306df774 7ae1932584ff 3204588282f3 c60a4b7edf2a bf69af78fefd 992d221cf3de]
	I0914 19:05:55.381063   29302 ssh_runner.go:195] Run: docker stop 5ca168b256ec bda018c9a602 fb2dbcea99e9 2de9c2baa72f 1dac2d18ee96 bd14e8416f22 2c6b193d8f06 ac89590af9af e7dd2a8d2bf2 79de1cbad023 bdae306df774 7ae1932584ff 3204588282f3 c60a4b7edf2a bf69af78fefd 992d221cf3de
	I0914 19:05:55.400500   29302 command_runner.go:130] > 5ca168b256ec
	I0914 19:05:55.400523   29302 command_runner.go:130] > bda018c9a602
	I0914 19:05:55.400528   29302 command_runner.go:130] > fb2dbcea99e9
	I0914 19:05:55.400532   29302 command_runner.go:130] > 2de9c2baa72f
	I0914 19:05:55.400537   29302 command_runner.go:130] > 1dac2d18ee96
	I0914 19:05:55.400545   29302 command_runner.go:130] > bd14e8416f22
	I0914 19:05:55.400549   29302 command_runner.go:130] > 2c6b193d8f06
	I0914 19:05:55.400915   29302 command_runner.go:130] > ac89590af9af
	I0914 19:05:55.400933   29302 command_runner.go:130] > e7dd2a8d2bf2
	I0914 19:05:55.400941   29302 command_runner.go:130] > 79de1cbad023
	I0914 19:05:55.400947   29302 command_runner.go:130] > bdae306df774
	I0914 19:05:55.400953   29302 command_runner.go:130] > 7ae1932584ff
	I0914 19:05:55.400959   29302 command_runner.go:130] > 3204588282f3
	I0914 19:05:55.400965   29302 command_runner.go:130] > c60a4b7edf2a
	I0914 19:05:55.400970   29302 command_runner.go:130] > bf69af78fefd
	I0914 19:05:55.400976   29302 command_runner.go:130] > 992d221cf3de
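Before rewriting the static manifests, restartCluster stops every kube-system container; the docker ps filter and docker stop invocations are the ones printed above. A minimal sketch of that step, assuming the docker CLI is reachable by the caller:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
        out, err := exec.Command("docker", "ps", "-a",
            "--filter=name=k8s_.*_(kube-system)_", "--format={{.ID}}").Output()
        if err != nil {
            panic(err)
        }
        ids := strings.Fields(string(out))
        if len(ids) == 0 {
            fmt.Println("no kube-system containers to stop")
            return
        }
        // docker stop <id> <id> ...
        args := append([]string{"stop"}, ids...)
        if err := exec.Command("docker", args...).Run(); err != nil {
            panic(err)
        }
        fmt.Println("stopped:", ids)
    }
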
	I0914 19:05:55.402045   29302 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0914 19:05:55.416372   29302 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 19:05:55.424910   29302 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0914 19:05:55.424932   29302 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0914 19:05:55.424943   29302 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0914 19:05:55.424952   29302 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 19:05:55.424980   29302 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 19:05:55.425021   29302 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 19:05:55.433299   29302 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0914 19:05:55.433317   29302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 19:05:55.549527   29302 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 19:05:55.549554   29302 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0914 19:05:55.549564   29302 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0914 19:05:55.549574   29302 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0914 19:05:55.549583   29302 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0914 19:05:55.549599   29302 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0914 19:05:55.549609   29302 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0914 19:05:55.549615   29302 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0914 19:05:55.549624   29302 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0914 19:05:55.549633   29302 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0914 19:05:55.549640   29302 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0914 19:05:55.549657   29302 command_runner.go:130] > [certs] Using the existing "sa" key
	I0914 19:05:55.549745   29302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 19:05:55.598988   29302 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 19:05:55.824313   29302 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 19:05:55.900894   29302 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 19:05:56.276915   29302 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 19:05:56.339928   29302 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 19:05:56.342661   29302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0914 19:05:56.405203   29302 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 19:05:56.406633   29302 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 19:05:56.407055   29302 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0914 19:05:56.524034   29302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 19:05:56.589683   29302 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 19:05:56.589714   29302 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 19:05:56.593812   29302 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 19:05:56.595032   29302 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 19:05:56.597321   29302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0914 19:05:56.696497   29302 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 19:05:56.699815   29302 api_server.go:52] waiting for apiserver process to appear ...
	I0914 19:05:56.699898   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 19:05:56.713289   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 19:05:57.226345   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 19:05:57.726390   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 19:05:58.226095   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 19:05:58.726390   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 19:05:59.226644   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 19:05:59.241067   29302 command_runner.go:130] > 1693
	I0914 19:05:59.241381   29302 api_server.go:72] duration metric: took 2.541565826s to wait for apiserver process to appear ...
	I0914 19:05:59.241402   29302 api_server.go:88] waiting for apiserver healthz status ...
	I0914 19:05:59.241422   29302 api_server.go:253] Checking apiserver healthz at https://192.168.39.14:8443/healthz ...
	I0914 19:06:02.195757   29302 api_server.go:279] https://192.168.39.14:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0914 19:06:02.195786   29302 api_server.go:103] status: https://192.168.39.14:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0914 19:06:02.195796   29302 api_server.go:253] Checking apiserver healthz at https://192.168.39.14:8443/healthz ...
	I0914 19:06:02.307219   29302 api_server.go:279] https://192.168.39.14:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 19:06:02.307250   29302 api_server.go:103] status: https://192.168.39.14:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 19:06:02.807963   29302 api_server.go:253] Checking apiserver healthz at https://192.168.39.14:8443/healthz ...
	I0914 19:06:02.814842   29302 api_server.go:279] https://192.168.39.14:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 19:06:02.814876   29302 api_server.go:103] status: https://192.168.39.14:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 19:06:03.307503   29302 api_server.go:253] Checking apiserver healthz at https://192.168.39.14:8443/healthz ...
	I0914 19:06:03.315888   29302 api_server.go:279] https://192.168.39.14:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0914 19:06:03.315914   29302 api_server.go:103] status: https://192.168.39.14:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0914 19:06:03.807505   29302 api_server.go:253] Checking apiserver healthz at https://192.168.39.14:8443/healthz ...
	I0914 19:06:03.812721   29302 api_server.go:279] https://192.168.39.14:8443/healthz returned 200:
	ok
	I0914 19:06:03.812788   29302 round_trippers.go:463] GET https://192.168.39.14:8443/version
	I0914 19:06:03.812794   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:03.812802   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:03.812809   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:03.821345   29302 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0914 19:06:03.821376   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:03.821387   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:03.821396   29302 round_trippers.go:580]     Content-Length: 263
	I0914 19:06:03.821402   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:03 GMT
	I0914 19:06:03.821410   29302 round_trippers.go:580]     Audit-Id: a2a9e97f-3007-4290-8f99-481d06fc6049
	I0914 19:06:03.821417   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:03.821424   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:03.821433   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:03.821483   29302 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.1",
	  "gitCommit": "8dc49c4b984b897d423aab4971090e1879eb4f23",
	  "gitTreeState": "clean",
	  "buildDate": "2023-08-24T11:16:30Z",
	  "goVersion": "go1.20.7",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0914 19:06:03.821569   29302 api_server.go:141] control plane version: v1.28.1
	I0914 19:06:03.821589   29302 api_server.go:131] duration metric: took 4.580178903s to wait for apiserver health ...
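The 403 / 500 / 200 sequence above is the usual healthz progression while a restarted apiserver finishes its post-start hooks, and the wait simply polls until a 200 arrives. A minimal sketch of such a poll, assuming the CA and client certificate paths from the kapi.go client config logged earlier:

    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "fmt"
        "io"
        "net/http"
        "os"
        "time"
    )

    func main() {
        const (
            // Paths and endpoint taken from the log; adjust for another environment.
            caFile   = "/home/jenkins/minikube-integration/17217-7285/.minikube/ca.crt"
            certFile = "/home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/client.crt"
            keyFile  = "/home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/client.key"
            healthz  = "https://192.168.39.14:8443/healthz"
        )

        caPEM, err := os.ReadFile(caFile)
        if err != nil {
            panic(err)
        }
        pool := x509.NewCertPool()
        pool.AppendCertsFromPEM(caPEM)

        cert, err := tls.LoadX509KeyPair(certFile, keyFile)
        if err != nil {
            panic(err)
        }

        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{
                RootCAs:      pool,
                Certificates: []tls.Certificate{cert},
            }},
        }

        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get(healthz)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("healthz:", string(body)) // "ok"
                    return
                }
                fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Fprintln(os.Stderr, "timed out waiting for apiserver health")
        os.Exit(1)
    }
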
	I0914 19:06:03.821600   29302 cni.go:84] Creating CNI manager for ""
	I0914 19:06:03.821611   29302 cni.go:136] 3 nodes found, recommending kindnet
	I0914 19:06:03.823525   29302 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0914 19:06:03.825085   29302 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0914 19:06:03.832345   29302 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0914 19:06:03.832364   29302 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0914 19:06:03.832370   29302 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0914 19:06:03.832380   29302 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0914 19:06:03.832391   29302 command_runner.go:130] > Access: 2023-09-14 19:05:33.824543091 +0000
	I0914 19:06:03.832399   29302 command_runner.go:130] > Modify: 2023-09-12 03:24:25.000000000 +0000
	I0914 19:06:03.832416   29302 command_runner.go:130] > Change: 2023-09-14 19:05:31.874543091 +0000
	I0914 19:06:03.832422   29302 command_runner.go:130] >  Birth: -
	I0914 19:06:03.832466   29302 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
	I0914 19:06:03.832475   29302 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0914 19:06:03.901488   29302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0914 19:06:05.205755   29302 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0914 19:06:05.209188   29302 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0914 19:06:05.212024   29302 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0914 19:06:05.225376   29302 command_runner.go:130] > daemonset.apps/kindnet configured
	I0914 19:06:05.229823   29302 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.32829993s)
	I0914 19:06:05.229853   29302 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 19:06:05.229964   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
	I0914 19:06:05.229975   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:05.229982   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:05.229988   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:05.234117   29302 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 19:06:05.234139   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:05.234149   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:05 GMT
	I0914 19:06:05.234158   29302 round_trippers.go:580]     Audit-Id: 78bdb13b-ed79-4db3-8008-4289bacf78fd
	I0914 19:06:05.234172   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:05.234180   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:05.234188   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:05.234195   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:05.236145   29302 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"795"},"items":[{"metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"790","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84544 chars]
	I0914 19:06:05.239946   29302 system_pods.go:59] 12 kube-system pods found
	I0914 19:06:05.239984   29302 system_pods.go:61] "coredns-5dd5756b68-qrv2r" [f9293d00-1000-4ffa-b978-d08c00eee7e7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 19:06:05.239998   29302 system_pods.go:61] "etcd-multinode-040952" [69002c12-b452-4986-a79f-1d67702a52ef] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0914 19:06:05.240008   29302 system_pods.go:61] "kindnet-hvz8s" [38b5564e-8c11-45e8-9751-bcaa4258a342] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0914 19:06:05.240015   29302 system_pods.go:61] "kindnet-lrkhw" [9861f216-97e0-4761-9531-cb34e8855913] Running
	I0914 19:06:05.240026   29302 system_pods.go:61] "kindnet-pjfsc" [7716e479-4492-439b-9bdf-077a541dc949] Running
	I0914 19:06:05.240036   29302 system_pods.go:61] "kube-apiserver-multinode-040952" [10fd42d2-c2af-48e4-8724-c8ffe95daa20] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 19:06:05.240054   29302 system_pods.go:61] "kube-controller-manager-multinode-040952" [a3657cb3-c202-4067-83e1-e015b97f23c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 19:06:05.240067   29302 system_pods.go:61] "kube-proxy-gldkh" [55ba7c02-d066-4399-a622-621499fbc662] Running
	I0914 19:06:05.240073   29302 system_pods.go:61] "kube-proxy-gpl2p" [4e6ab5b8-53fa-4e56-b534-e130dc2b3c0f] Running
	I0914 19:06:05.240087   29302 system_pods.go:61] "kube-proxy-hbsmt" [d68fe199-9969-47a9-95a1-04e766c5dbaa] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0914 19:06:05.240101   29302 system_pods.go:61] "kube-scheduler-multinode-040952" [386eb63c-5554-4ab9-8241-b096f390ee9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0914 19:06:05.240113   29302 system_pods.go:61] "storage-provisioner" [8f25fe5b-237f-415a-baca-e4342106bb4d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0914 19:06:05.240123   29302 system_pods.go:74] duration metric: took 10.263188ms to wait for pod list to return data ...
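
For reference, a minimal client-go sketch of the pod-readiness summary the system_pods.go lines above produce; the kubeconfig path and output format are illustrative, not minikube's actual code.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from a kubeconfig file (path is illustrative).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// List all pods in kube-system, as the log does via GET .../namespaces/kube-system/pods.
	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))

	for _, p := range pods.Items {
		// Report phase plus the Ready condition, mirroring the "Running / Ready:..." lines.
		ready := corev1.ConditionUnknown
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				ready = c.Status
			}
		}
		fmt.Printf("%q %s / Ready:%s\n", p.Name, p.Status.Phase, ready)
	}
}
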
	I0914 19:06:05.240135   29302 node_conditions.go:102] verifying NodePressure condition ...
	I0914 19:06:05.240193   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes
	I0914 19:06:05.240202   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:05.240212   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:05.240223   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:05.245363   29302 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0914 19:06:05.245382   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:05.245393   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:05.245401   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:05 GMT
	I0914 19:06:05.245416   29302 round_trippers.go:580]     Audit-Id: ee9162aa-d308-4bb2-927d-55e7e1011d87
	I0914 19:06:05.245424   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:05.245435   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:05.245471   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:05.245800   29302 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"795"},"items":[{"metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"782","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 13790 chars]
	I0914 19:06:05.246934   29302 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 19:06:05.246965   29302 node_conditions.go:123] node cpu capacity is 2
	I0914 19:06:05.246982   29302 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 19:06:05.246996   29302 node_conditions.go:123] node cpu capacity is 2
	I0914 19:06:05.247002   29302 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 19:06:05.247012   29302 node_conditions.go:123] node cpu capacity is 2
	I0914 19:06:05.247020   29302 node_conditions.go:105] duration metric: took 6.879016ms to run NodePressure ...
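
A sketch of the node-capacity lookup behind the node_conditions lines above; it assumes a clientset built as in the previous sketch.

package nodecheck

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printNodeCapacity mirrors the node_conditions lines: for every node it
// reports ephemeral-storage and CPU capacity from the NodeList response.
func printNodeCapacity(ctx context.Context, client kubernetes.Interface) error {
	nodes, err := client.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
	}
	return nil
}
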
	I0914 19:06:05.247043   29302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0914 19:06:05.487041   29302 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0914 19:06:05.487069   29302 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
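
The logged kubeadm phase can be reproduced by hand; a hedged sketch using os/exec, with the PATH prefix and config path copied from the command above (minikube itself runs this over its SSH runner inside the VM, not locally).

package kubeadmphase

import (
	"fmt"
	"os/exec"
)

// applyEssentialAddons re-runs "kubeadm init phase addon all" as shown in the
// log. Paths are taken from the logged command and may differ on other clusters.
func applyEssentialAddons() error {
	cmd := exec.Command("sudo", "env", "PATH=/var/lib/minikube/binaries/v1.28.1:/usr/bin",
		"kubeadm", "init", "phase", "addon", "all",
		"--config", "/var/tmp/minikube/kubeadm.yaml")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	return err
}
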
	I0914 19:06:05.487097   29302 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0914 19:06:05.487490   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0914 19:06:05.487506   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:05.487516   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:05.487526   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:05.491797   29302 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 19:06:05.491820   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:05.491831   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:05.491840   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:05.491848   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:05.491857   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:05.491866   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:05 GMT
	I0914 19:06:05.491875   29302 round_trippers.go:580]     Audit-Id: 9814298e-c189-437e-bfca-dbe0a19423d2
	I0914 19:06:05.492280   29302 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"797"},"items":[{"metadata":{"name":"etcd-multinode-040952","namespace":"kube-system","uid":"69002c12-b452-4986-a79f-1d67702a52ef","resourceVersion":"785","creationTimestamp":"2023-09-14T19:01:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.14:2379","kubernetes.io/config.hash":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.mirror":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.seen":"2023-09-14T19:01:40.726714562Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations
":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:ku [truncated 29761 chars]
	I0914 19:06:05.493221   29302 kubeadm.go:787] kubelet initialised
	I0914 19:06:05.493240   29302 kubeadm.go:788] duration metric: took 6.131207ms waiting for restarted kubelet to initialise ...
	I0914 19:06:05.493249   29302 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
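
A sketch of the kind of bounded wait pod_ready.go performs for each pod below; only the 4m0s budget comes from the log, the 500ms poll interval and clientset construction are assumptions.

package podwait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls until the named kube-system pod reports the Ready
// condition, or the 4m0s budget is exhausted.
func waitPodReady(client kubernetes.Interface, name string) error {
	return wait.PollImmediate(500*time.Millisecond, 4*time.Minute, func() (bool, error) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return true, nil
			}
		}
		return false, nil
	})
}
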
	I0914 19:06:05.493307   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
	I0914 19:06:05.493322   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:05.493334   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:05.493347   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:05.496849   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:05.496867   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:05.496876   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:05.496885   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:05.496892   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:05 GMT
	I0914 19:06:05.496901   29302 round_trippers.go:580]     Audit-Id: a7031aa1-24df-4c90-9e52-85f8f96f783c
	I0914 19:06:05.496912   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:05.496921   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:05.497873   29302 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"797"},"items":[{"metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"790","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84544 chars]
	I0914 19:06:05.500273   29302 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-qrv2r" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:05.500335   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qrv2r
	I0914 19:06:05.500343   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:05.500350   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:05.500356   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:05.502411   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:05.502429   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:05.502441   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:05.502449   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:05.502459   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:05.502469   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:05.502478   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:05 GMT
	I0914 19:06:05.502490   29302 round_trippers.go:580]     Audit-Id: f347830a-65d2-4cb4-8423-8b8fc5cc870f
	I0914 19:06:05.502830   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"790","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6544 chars]
	I0914 19:06:05.503304   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:05.503318   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:05.503328   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:05.503337   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:05.505839   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:05.505853   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:05.505864   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:05.505870   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:05.505875   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:05.505880   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:05.505886   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:05 GMT
	I0914 19:06:05.505894   29302 round_trippers.go:580]     Audit-Id: 71902073-b1b8-4c71-b1d1-af71d48217f1
	I0914 19:06:05.506071   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"782","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5283 chars]
	I0914 19:06:05.506467   29302 pod_ready.go:97] node "multinode-040952" hosting pod "coredns-5dd5756b68-qrv2r" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-040952" has status "Ready":"False"
	I0914 19:06:05.506490   29302 pod_ready.go:81] duration metric: took 6.199179ms waiting for pod "coredns-5dd5756b68-qrv2r" in "kube-system" namespace to be "Ready" ...
	E0914 19:06:05.506501   29302 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-040952" hosting pod "coredns-5dd5756b68-qrv2r" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-040952" has status "Ready":"False"
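
The "skipping!" messages above and below come from first checking the Ready condition of the node hosting the pod; a sketch of that check, assuming a clientset as before.

package nodeready

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nodeIsReady reports whether the named node has Ready=True; if it does not,
// waiting on that node's pods is pointless and the wait is skipped.
func nodeIsReady(ctx context.Context, client kubernetes.Interface, nodeName string) (bool, error) {
	node, err := client.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, fmt.Errorf("node %q has no Ready condition", nodeName)
}
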
	I0914 19:06:05.506518   29302 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-040952" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:05.506572   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-040952
	I0914 19:06:05.506583   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:05.506593   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:05.506606   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:05.508379   29302 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 19:06:05.508391   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:05.508397   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:05.508403   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:05.508408   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:05 GMT
	I0914 19:06:05.508414   29302 round_trippers.go:580]     Audit-Id: adfe03d4-2812-4ba5-98dd-67afaa529395
	I0914 19:06:05.508419   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:05.508425   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:05.508772   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-040952","namespace":"kube-system","uid":"69002c12-b452-4986-a79f-1d67702a52ef","resourceVersion":"785","creationTimestamp":"2023-09-14T19:01:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.14:2379","kubernetes.io/config.hash":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.mirror":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.seen":"2023-09-14T19:01:40.726714562Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6295 chars]
	I0914 19:06:05.509094   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:05.509104   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:05.509111   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:05.509116   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:05.510985   29302 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 19:06:05.511003   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:05.511012   29302 round_trippers.go:580]     Audit-Id: 0ee321ba-916a-449f-a719-2eb1a4973cde
	I0914 19:06:05.511019   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:05.511028   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:05.511036   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:05.511044   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:05.511057   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:05 GMT
	I0914 19:06:05.511184   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"782","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5283 chars]
	I0914 19:06:05.511454   29302 pod_ready.go:97] node "multinode-040952" hosting pod "etcd-multinode-040952" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-040952" has status "Ready":"False"
	I0914 19:06:05.511470   29302 pod_ready.go:81] duration metric: took 4.945047ms waiting for pod "etcd-multinode-040952" in "kube-system" namespace to be "Ready" ...
	E0914 19:06:05.511477   29302 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-040952" hosting pod "etcd-multinode-040952" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-040952" has status "Ready":"False"
	I0914 19:06:05.511489   29302 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-040952" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:05.511533   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-040952
	I0914 19:06:05.511540   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:05.511546   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:05.511552   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:05.513172   29302 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 19:06:05.513189   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:05.513198   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:05.513206   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:05.513213   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:05.513222   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:05.513230   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:05 GMT
	I0914 19:06:05.513246   29302 round_trippers.go:580]     Audit-Id: 98886ad5-cb3e-42c1-9236-b75a8e09f5f5
	I0914 19:06:05.513380   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-040952","namespace":"kube-system","uid":"10fd42d2-c2af-48e4-8724-c8ffe95daa20","resourceVersion":"786","creationTimestamp":"2023-09-14T19:01:41Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.14:8443","kubernetes.io/config.hash":"8756931ebb3ad632d1fa90a79d546b12","kubernetes.io/config.mirror":"8756931ebb3ad632d1fa90a79d546b12","kubernetes.io/config.seen":"2023-09-14T19:01:40.726715710Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7850 chars]
	I0914 19:06:05.513760   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:05.513773   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:05.513780   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:05.513786   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:05.515437   29302 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 19:06:05.515456   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:05.515464   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:05.515472   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:05 GMT
	I0914 19:06:05.515481   29302 round_trippers.go:580]     Audit-Id: cc794f2f-df9b-4b8c-8271-303fbb3bda2a
	I0914 19:06:05.515489   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:05.515502   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:05.515510   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:05.515753   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"782","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5283 chars]
	I0914 19:06:05.516001   29302 pod_ready.go:97] node "multinode-040952" hosting pod "kube-apiserver-multinode-040952" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-040952" has status "Ready":"False"
	I0914 19:06:05.516014   29302 pod_ready.go:81] duration metric: took 4.515313ms waiting for pod "kube-apiserver-multinode-040952" in "kube-system" namespace to be "Ready" ...
	E0914 19:06:05.516021   29302 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-040952" hosting pod "kube-apiserver-multinode-040952" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-040952" has status "Ready":"False"
	I0914 19:06:05.516027   29302 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-040952" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:05.516066   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-040952
	I0914 19:06:05.516073   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:05.516080   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:05.516086   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:05.518245   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:05.518263   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:05.518277   29302 round_trippers.go:580]     Audit-Id: 6779b7f0-25f9-49d1-be85-87a44d8c3552
	I0914 19:06:05.518286   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:05.518294   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:05.518301   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:05.518314   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:05.518322   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:05 GMT
	I0914 19:06:05.518564   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-040952","namespace":"kube-system","uid":"a3657cb3-c202-4067-83e1-e015b97f23c7","resourceVersion":"783","creationTimestamp":"2023-09-14T19:01:41Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"eae1e4ee5d796cbce52373fd75c02fd6","kubernetes.io/config.mirror":"eae1e4ee5d796cbce52373fd75c02fd6","kubernetes.io/config.seen":"2023-09-14T19:01:40.726708753Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7436 chars]
	I0914 19:06:05.630264   29302 request.go:629] Waited for 111.324976ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/multinode-040952
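
The client-side throttling reported here is client-go's default request rate limiter (QPS 5, burst 10) on rest.Config, not API-server priority and fairness; a sketch of raising those limits, with arbitrary example values.

package throttling

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// newClientWithHigherLimits builds a clientset whose rate limiter allows more
// requests before the "Waited for ... due to client-side throttling" delays kick in.
func newClientWithHigherLimits(cfg *rest.Config) (*kubernetes.Clientset, error) {
	cfg.QPS = 50    // default is 5 requests/second
	cfg.Burst = 100 // default burst is 10
	return kubernetes.NewForConfig(cfg)
}
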
	I0914 19:06:05.630352   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:05.630359   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:05.630372   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:05.630382   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:05.632981   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:05.633000   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:05.633006   29302 round_trippers.go:580]     Audit-Id: fd7872d6-edd4-429f-97f2-b2ec1c12de54
	I0914 19:06:05.633012   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:05.633017   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:05.633023   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:05.633028   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:05.633036   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:05 GMT
	I0914 19:06:05.633196   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"782","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5283 chars]
	I0914 19:06:05.633629   29302 pod_ready.go:97] node "multinode-040952" hosting pod "kube-controller-manager-multinode-040952" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-040952" has status "Ready":"False"
	I0914 19:06:05.633656   29302 pod_ready.go:81] duration metric: took 117.619154ms waiting for pod "kube-controller-manager-multinode-040952" in "kube-system" namespace to be "Ready" ...
	E0914 19:06:05.633669   29302 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-040952" hosting pod "kube-controller-manager-multinode-040952" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-040952" has status "Ready":"False"
	I0914 19:06:05.633680   29302 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-gldkh" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:05.830043   29302 request.go:629] Waited for 196.287848ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gldkh
	I0914 19:06:05.830099   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gldkh
	I0914 19:06:05.830103   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:05.830111   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:05.830118   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:05.832762   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:05.832785   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:05.832794   29302 round_trippers.go:580]     Audit-Id: 3c18be9a-6c71-4025-be83-5fc9c53246a5
	I0914 19:06:05.832801   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:05.832808   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:05.832815   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:05.832822   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:05.832829   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:05 GMT
	I0914 19:06:05.833118   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gldkh","generateName":"kube-proxy-","namespace":"kube-system","uid":"55ba7c02-d066-4399-a622-621499fbc662","resourceVersion":"541","creationTimestamp":"2023-09-14T19:02:56Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b81636f3-a9be-4290-be24-324c7fac8ce6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:02:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b81636f3-a9be-4290-be24-324c7fac8ce6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5541 chars]
	I0914 19:06:06.029994   29302 request.go:629] Waited for 196.460915ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/multinode-040952-m02
	I0914 19:06:06.030079   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952-m02
	I0914 19:06:06.030087   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:06.030099   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:06.030108   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:06.032502   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:06.032520   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:06.032527   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:05 GMT
	I0914 19:06:06.032532   29302 round_trippers.go:580]     Audit-Id: 9d3f52cf-02ab-4abb-92c1-8a7d06224f0e
	I0914 19:06:06.032538   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:06.032542   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:06.032547   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:06.032553   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:06.032888   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952-m02","uid":"26bddb4d-d211-4e3d-a188-317e100d2aa5","resourceVersion":"608","creationTimestamp":"2023-09-14T19:02:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:02:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3266 chars]
	I0914 19:06:06.033151   29302 pod_ready.go:92] pod "kube-proxy-gldkh" in "kube-system" namespace has status "Ready":"True"
	I0914 19:06:06.033165   29302 pod_ready.go:81] duration metric: took 399.477836ms waiting for pod "kube-proxy-gldkh" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:06.033173   29302 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-gpl2p" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:06.230655   29302 request.go:629] Waited for 197.428191ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gpl2p
	I0914 19:06:06.230712   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gpl2p
	I0914 19:06:06.230718   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:06.230725   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:06.230733   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:06.233365   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:06.233384   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:06.233391   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:06 GMT
	I0914 19:06:06.233397   29302 round_trippers.go:580]     Audit-Id: 53af8c6b-f3d3-4507-ba18-bcb4d7a95376
	I0914 19:06:06.233406   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:06.233422   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:06.233431   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:06.233443   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:06.233771   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gpl2p","generateName":"kube-proxy-","namespace":"kube-system","uid":"4e6ab5b8-53fa-4e56-b534-e130dc2b3c0f","resourceVersion":"761","creationTimestamp":"2023-09-14T19:03:50Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b81636f3-a9be-4290-be24-324c7fac8ce6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:03:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b81636f3-a9be-4290-be24-324c7fac8ce6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5750 chars]
	I0914 19:06:06.430710   29302 request.go:629] Waited for 196.348215ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/multinode-040952-m03
	I0914 19:06:06.430762   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952-m03
	I0914 19:06:06.430769   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:06.430779   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:06.430788   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:06.433906   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:06.433930   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:06.433942   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:06.433951   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:06.433960   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:06.433969   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:06 GMT
	I0914 19:06:06.433985   29302 round_trippers.go:580]     Audit-Id: 1280bf02-d81c-4bca-b4e5-275129840268
	I0914 19:06:06.433994   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:06.434112   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952-m03","uid":"28b45907-e363-4b10-afa7-ecf3cea247b8","resourceVersion":"772","creationTimestamp":"2023-09-14T19:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3204 chars]
	I0914 19:06:06.434453   29302 pod_ready.go:92] pod "kube-proxy-gpl2p" in "kube-system" namespace has status "Ready":"True"
	I0914 19:06:06.434474   29302 pod_ready.go:81] duration metric: took 401.294532ms waiting for pod "kube-proxy-gpl2p" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:06.434488   29302 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-hbsmt" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:06.630939   29302 request.go:629] Waited for 196.385647ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hbsmt
	I0914 19:06:06.631022   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hbsmt
	I0914 19:06:06.631030   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:06.631042   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:06.631051   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:06.633497   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:06.633520   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:06.633530   29302 round_trippers.go:580]     Audit-Id: 1dc1f940-384d-494a-8e64-361f1ad205ba
	I0914 19:06:06.633543   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:06.633552   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:06.633562   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:06.633573   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:06.633584   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:06 GMT
	I0914 19:06:06.633766   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hbsmt","generateName":"kube-proxy-","namespace":"kube-system","uid":"d68fe199-9969-47a9-95a1-04e766c5dbaa","resourceVersion":"788","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b81636f3-a9be-4290-be24-324c7fac8ce6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b81636f3-a9be-4290-be24-324c7fac8ce6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5928 chars]
	I0914 19:06:06.830679   29302 request.go:629] Waited for 196.393813ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:06.830735   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:06.830740   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:06.830747   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:06.830754   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:06.833354   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:06.833375   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:06.833382   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:06.833387   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:06.833392   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:06.833397   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:06.833402   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:06 GMT
	I0914 19:06:06.833407   29302 round_trippers.go:580]     Audit-Id: a24b66f4-fa51-4df4-9bc5-590f310c8108
	I0914 19:06:06.833985   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"782","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5283 chars]
	I0914 19:06:06.834382   29302 pod_ready.go:97] node "multinode-040952" hosting pod "kube-proxy-hbsmt" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-040952" has status "Ready":"False"
	I0914 19:06:06.834408   29302 pod_ready.go:81] duration metric: took 399.910926ms waiting for pod "kube-proxy-hbsmt" in "kube-system" namespace to be "Ready" ...
	E0914 19:06:06.834420   29302 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-040952" hosting pod "kube-proxy-hbsmt" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-040952" has status "Ready":"False"
	I0914 19:06:06.834433   29302 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-040952" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:07.030857   29302 request.go:629] Waited for 196.352242ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-040952
	I0914 19:06:07.030940   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-040952
	I0914 19:06:07.030951   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:07.030964   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:07.030977   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:07.034225   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:07.034245   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:07.034253   29302 round_trippers.go:580]     Audit-Id: 71cfae50-3c69-4f2b-8709-aad710c8dec2
	I0914 19:06:07.034260   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:07.034268   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:07.034276   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:07.034289   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:07.034298   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:06 GMT
	I0914 19:06:07.034501   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-040952","namespace":"kube-system","uid":"386eb63c-5554-4ab9-8241-b096f390ee9c","resourceVersion":"784","creationTimestamp":"2023-09-14T19:01:38Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f83b231eda73d0afcf9cdab17117c7e6","kubernetes.io/config.mirror":"f83b231eda73d0afcf9cdab17117c7e6","kubernetes.io/config.seen":"2023-09-14T19:01:32.411176140Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5148 chars]
	I0914 19:06:07.230128   29302 request.go:629] Waited for 195.265564ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:07.230211   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:07.230221   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:07.230229   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:07.230235   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:07.233612   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:07.233631   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:07.233641   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:07 GMT
	I0914 19:06:07.233648   29302 round_trippers.go:580]     Audit-Id: c6e16c92-92f1-4f61-b0d2-523db2c467d1
	I0914 19:06:07.233656   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:07.233665   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:07.233675   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:07.233684   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:07.234058   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"782","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5283 chars]
	I0914 19:06:07.234344   29302 pod_ready.go:97] node "multinode-040952" hosting pod "kube-scheduler-multinode-040952" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-040952" has status "Ready":"False"
	I0914 19:06:07.234368   29302 pod_ready.go:81] duration metric: took 399.923264ms waiting for pod "kube-scheduler-multinode-040952" in "kube-system" namespace to be "Ready" ...
	E0914 19:06:07.234381   29302 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-040952" hosting pod "kube-scheduler-multinode-040952" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-040952" has status "Ready":"False"
	I0914 19:06:07.234393   29302 pod_ready.go:38] duration metric: took 1.741133779s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 19:06:07.234417   29302 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 19:06:07.250231   29302 command_runner.go:130] > -16
	I0914 19:06:07.250255   29302 ops.go:34] apiserver oom_adj: -16
	I0914 19:06:07.250263   29302 kubeadm.go:640] restartCluster took 21.909989817s
	I0914 19:06:07.250271   29302 kubeadm.go:406] StartCluster complete in 21.938026901s
	I0914 19:06:07.250290   29302 settings.go:142] acquiring lock: {Name:mkaf2d84e9fceec2029b98353d3d8cae1b369e09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 19:06:07.250389   29302 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17217-7285/kubeconfig
	I0914 19:06:07.251059   29302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17217-7285/kubeconfig: {Name:mkd810f3a7b7ee0c3e3eff94a19f3da881e8200c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 19:06:07.251279   29302 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 19:06:07.251383   29302 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0914 19:06:07.253531   29302 out.go:177] * Enabled addons: 
	I0914 19:06:07.251517   29302 config.go:182] Loaded profile config "multinode-040952": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 19:06:07.251534   29302 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17217-7285/kubeconfig
	I0914 19:06:07.255467   29302 addons.go:502] enable addons completed in 4.093858ms: enabled=[]
	I0914 19:06:07.255670   29302 kapi.go:59] client config for multinode-040952: &rest.Config{Host:"https://192.168.39.14:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/client.crt", KeyFile:"/home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/client.key", CAFile:"/home/jenkins/minikube-integration/17217-7285/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c15e60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0914 19:06:07.255997   29302 round_trippers.go:463] GET https://192.168.39.14:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0914 19:06:07.256010   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:07.256017   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:07.256025   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:07.263309   29302 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0914 19:06:07.263329   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:07.263340   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:07.263348   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:07.263354   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:07.263359   29302 round_trippers.go:580]     Content-Length: 291
	I0914 19:06:07.263365   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:07 GMT
	I0914 19:06:07.263370   29302 round_trippers.go:580]     Audit-Id: 5a75d744-b3cd-40e6-abf4-7b1c8daac075
	I0914 19:06:07.263377   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:07.263397   29302 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"9776e459-4280-488a-924c-4e921bbd9495","resourceVersion":"796","creationTimestamp":"2023-09-14T19:01:40Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0914 19:06:07.263508   29302 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-040952" context rescaled to 1 replicas
	I0914 19:06:07.263529   29302 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.14 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 19:06:07.264985   29302 out.go:177] * Verifying Kubernetes components...
	I0914 19:06:07.266359   29302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 19:06:07.389385   29302 command_runner.go:130] > apiVersion: v1
	I0914 19:06:07.389403   29302 command_runner.go:130] > data:
	I0914 19:06:07.389408   29302 command_runner.go:130] >   Corefile: |
	I0914 19:06:07.389411   29302 command_runner.go:130] >     .:53 {
	I0914 19:06:07.389415   29302 command_runner.go:130] >         log
	I0914 19:06:07.389421   29302 command_runner.go:130] >         errors
	I0914 19:06:07.389425   29302 command_runner.go:130] >         health {
	I0914 19:06:07.389429   29302 command_runner.go:130] >            lameduck 5s
	I0914 19:06:07.389433   29302 command_runner.go:130] >         }
	I0914 19:06:07.389437   29302 command_runner.go:130] >         ready
	I0914 19:06:07.389443   29302 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0914 19:06:07.389447   29302 command_runner.go:130] >            pods insecure
	I0914 19:06:07.389455   29302 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0914 19:06:07.389473   29302 command_runner.go:130] >            ttl 30
	I0914 19:06:07.389477   29302 command_runner.go:130] >         }
	I0914 19:06:07.389483   29302 command_runner.go:130] >         prometheus :9153
	I0914 19:06:07.389487   29302 command_runner.go:130] >         hosts {
	I0914 19:06:07.389493   29302 command_runner.go:130] >            192.168.39.1 host.minikube.internal
	I0914 19:06:07.389497   29302 command_runner.go:130] >            fallthrough
	I0914 19:06:07.389501   29302 command_runner.go:130] >         }
	I0914 19:06:07.389508   29302 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0914 19:06:07.389513   29302 command_runner.go:130] >            max_concurrent 1000
	I0914 19:06:07.389517   29302 command_runner.go:130] >         }
	I0914 19:06:07.389520   29302 command_runner.go:130] >         cache 30
	I0914 19:06:07.389527   29302 command_runner.go:130] >         loop
	I0914 19:06:07.389532   29302 command_runner.go:130] >         reload
	I0914 19:06:07.389541   29302 command_runner.go:130] >         loadbalance
	I0914 19:06:07.389549   29302 command_runner.go:130] >     }
	I0914 19:06:07.389558   29302 command_runner.go:130] > kind: ConfigMap
	I0914 19:06:07.389564   29302 command_runner.go:130] > metadata:
	I0914 19:06:07.389573   29302 command_runner.go:130] >   creationTimestamp: "2023-09-14T19:01:40Z"
	I0914 19:06:07.389585   29302 command_runner.go:130] >   name: coredns
	I0914 19:06:07.389594   29302 command_runner.go:130] >   namespace: kube-system
	I0914 19:06:07.389604   29302 command_runner.go:130] >   resourceVersion: "404"
	I0914 19:06:07.389612   29302 command_runner.go:130] >   uid: 77b79b35-a304-4075-b4c4-6b8a52cfe75c
	I0914 19:06:07.389643   29302 node_ready.go:35] waiting up to 6m0s for node "multinode-040952" to be "Ready" ...
	I0914 19:06:07.389797   29302 start.go:890] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0914 19:06:07.431021   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:07.431047   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:07.431059   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:07.431069   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:07.434336   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:07.434359   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:07.434367   29302 round_trippers.go:580]     Audit-Id: f0218504-ef8b-4fee-a836-3f16c97e6d1d
	I0914 19:06:07.434372   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:07.434378   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:07.434383   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:07.434389   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:07.434399   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:07 GMT
	I0914 19:06:07.434888   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"782","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5283 chars]
	I0914 19:06:07.630657   29302 request.go:629] Waited for 195.358734ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:07.630713   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:07.630720   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:07.630729   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:07.630738   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:07.635002   29302 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 19:06:07.635021   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:07.635027   29302 round_trippers.go:580]     Audit-Id: 0e51cba7-34eb-44c3-be48-8785725a128f
	I0914 19:06:07.635033   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:07.635038   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:07.635043   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:07.635048   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:07.635053   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:07 GMT
	I0914 19:06:07.635788   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"782","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5283 chars]
	I0914 19:06:08.136884   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:08.136903   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:08.136913   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:08.136919   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:08.140137   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:08.140160   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:08.140168   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:08 GMT
	I0914 19:06:08.140173   29302 round_trippers.go:580]     Audit-Id: 9ec77217-1afd-42b6-aaf7-211e85629e48
	I0914 19:06:08.140179   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:08.140184   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:08.140189   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:08.140194   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:08.140344   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"782","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5283 chars]
	I0914 19:06:08.637040   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:08.637079   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:08.637091   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:08.637101   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:08.639714   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:08.639733   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:08.639744   29302 round_trippers.go:580]     Audit-Id: d47f9fd4-8dec-46b1-8ce9-436c0350c5ca
	I0914 19:06:08.639752   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:08.639760   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:08.639769   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:08.639779   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:08.639788   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:08 GMT
	I0914 19:06:08.640112   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"782","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5283 chars]
	I0914 19:06:09.136649   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:09.136682   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:09.136690   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:09.136696   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:09.139686   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:09.139704   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:09.139715   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:09.139724   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:09.139733   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:09.139739   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:09 GMT
	I0914 19:06:09.139745   29302 round_trippers.go:580]     Audit-Id: ae97ecdc-ac59-4df9-80fb-ab01ff2852ec
	I0914 19:06:09.139750   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:09.140167   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"782","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5283 chars]
	I0914 19:06:09.636845   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:09.636866   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:09.636874   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:09.636880   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:09.639508   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:09.639525   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:09.639534   29302 round_trippers.go:580]     Audit-Id: 2a2efe7f-361b-45a2-b3cb-a7e9e84043e9
	I0914 19:06:09.639541   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:09.639549   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:09.639558   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:09.639568   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:09.639578   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:09 GMT
	I0914 19:06:09.639997   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"782","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5283 chars]
	I0914 19:06:09.640405   29302 node_ready.go:58] node "multinode-040952" has status "Ready":"False"
	I0914 19:06:10.136599   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:10.136624   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:10.136638   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:10.136648   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:10.140273   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:10.140297   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:10.140306   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:10.140313   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:10.140320   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:10.140332   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:10.140340   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:10 GMT
	I0914 19:06:10.140347   29302 round_trippers.go:580]     Audit-Id: 1af6dc6d-a25f-4a81-86a3-d239224c606e
	I0914 19:06:10.140506   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:10.140798   29302 node_ready.go:49] node "multinode-040952" has status "Ready":"True"
	I0914 19:06:10.140815   29302 node_ready.go:38] duration metric: took 2.751153874s waiting for node "multinode-040952" to be "Ready" ...
	I0914 19:06:10.140825   29302 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 19:06:10.140877   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
	I0914 19:06:10.140887   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:10.140897   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:10.140907   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:10.145518   29302 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 19:06:10.145535   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:10.145542   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:10.145547   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:10 GMT
	I0914 19:06:10.145557   29302 round_trippers.go:580]     Audit-Id: d738ec8e-27bb-4210-8329-89e64df5055c
	I0914 19:06:10.145569   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:10.145579   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:10.145590   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:10.146881   29302 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"868"},"items":[{"metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"790","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 83954 chars]
	I0914 19:06:10.149263   29302 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-qrv2r" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:10.149331   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qrv2r
	I0914 19:06:10.149342   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:10.149353   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:10.149364   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:10.151221   29302 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 19:06:10.151235   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:10.151241   29302 round_trippers.go:580]     Audit-Id: 9dce5aa8-17a9-43c4-9448-421e8ef000fe
	I0914 19:06:10.151247   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:10.151255   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:10.151264   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:10.151281   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:10.151288   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:10 GMT
	I0914 19:06:10.151447   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"790","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6544 chars]
	I0914 19:06:10.151815   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:10.151829   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:10.151839   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:10.151847   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:10.154035   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:10.154047   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:10.154053   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:10.154058   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:10.154063   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:10.154069   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:10 GMT
	I0914 19:06:10.154075   29302 round_trippers.go:580]     Audit-Id: f451201e-e118-40ff-8809-e06aa3aa8567
	I0914 19:06:10.154084   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:10.154352   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:10.154718   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qrv2r
	I0914 19:06:10.154731   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:10.154742   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:10.154752   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:10.156468   29302 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 19:06:10.156482   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:10.156491   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:10.156501   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:10.156513   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:10.156524   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:10 GMT
	I0914 19:06:10.156538   29302 round_trippers.go:580]     Audit-Id: 056aca82-7d21-4539-9de8-316f54300fbb
	I0914 19:06:10.156548   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:10.156671   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"790","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6544 chars]
	I0914 19:06:10.157120   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:10.157136   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:10.157147   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:10.157162   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:10.159000   29302 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 19:06:10.159014   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:10.159023   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:10.159031   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:10.159039   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:10.159049   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:10 GMT
	I0914 19:06:10.159059   29302 round_trippers.go:580]     Audit-Id: 053f7e6a-3d64-496b-a692-e6d8d7de77dc
	I0914 19:06:10.159074   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:10.159292   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:10.660315   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qrv2r
	I0914 19:06:10.660343   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:10.660354   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:10.660364   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:10.662669   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:10.662688   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:10.662694   29302 round_trippers.go:580]     Audit-Id: 0b5959bf-4f92-40f5-bff0-64259ee8d0e9
	I0914 19:06:10.662703   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:10.662711   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:10.662723   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:10.662732   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:10.662744   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:10 GMT
	I0914 19:06:10.663162   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"790","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6544 chars]
	I0914 19:06:10.663793   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:10.663810   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:10.663822   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:10.663830   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:10.667280   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:10.667294   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:10.667299   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:10.667304   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:10.667310   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:10 GMT
	I0914 19:06:10.667315   29302 round_trippers.go:580]     Audit-Id: adc471fd-2452-48eb-9634-4a15a4129e27
	I0914 19:06:10.667320   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:10.667325   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:10.667519   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:11.160702   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qrv2r
	I0914 19:06:11.160731   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:11.160744   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:11.160753   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:11.164208   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:11.164227   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:11.164234   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:11.164240   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:11 GMT
	I0914 19:06:11.164261   29302 round_trippers.go:580]     Audit-Id: 3b81510c-ceb9-488e-bc2e-b21d77b051e2
	I0914 19:06:11.164273   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:11.164281   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:11.164290   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:11.164555   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"790","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6544 chars]
	I0914 19:06:11.165152   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:11.165174   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:11.165187   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:11.165197   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:11.168098   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:11.168117   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:11.168125   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:11.168133   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:11.168142   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:11 GMT
	I0914 19:06:11.168151   29302 round_trippers.go:580]     Audit-Id: 15145bd3-b367-4e99-b3ce-0ae58ef5c733
	I0914 19:06:11.168161   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:11.168168   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:11.168530   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:11.660168   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qrv2r
	I0914 19:06:11.660193   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:11.660205   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:11.660216   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:11.663403   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:11.663424   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:11.663434   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:11.663442   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:11 GMT
	I0914 19:06:11.663449   29302 round_trippers.go:580]     Audit-Id: 3362ce2b-8605-45fd-8885-3eaeb408ef56
	I0914 19:06:11.663457   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:11.663466   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:11.663476   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:11.664334   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"790","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6544 chars]
	I0914 19:06:11.664760   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:11.664775   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:11.664785   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:11.664795   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:11.671505   29302 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0914 19:06:11.671522   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:11.671530   29302 round_trippers.go:580]     Audit-Id: 654293a2-0981-4bec-9543-4726a90c72a3
	I0914 19:06:11.671539   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:11.671551   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:11.671560   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:11.671567   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:11.671576   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:11 GMT
	I0914 19:06:11.671723   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:12.160486   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qrv2r
	I0914 19:06:12.160512   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:12.160524   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:12.160534   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:12.163604   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:12.163624   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:12.163634   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:12.163644   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:12.163652   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:12.163661   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:12 GMT
	I0914 19:06:12.163674   29302 round_trippers.go:580]     Audit-Id: 746f41fe-b54a-4602-ba74-6665d07e9fc7
	I0914 19:06:12.163683   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:12.164257   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"790","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6544 chars]
	I0914 19:06:12.164698   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:12.164712   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:12.164721   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:12.164731   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:12.166907   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:12.166920   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:12.166926   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:12.166934   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:12.166942   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:12 GMT
	I0914 19:06:12.166953   29302 round_trippers.go:580]     Audit-Id: e83a6e6d-40cb-4779-8c0a-8f5c050ff286
	I0914 19:06:12.166961   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:12.166970   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:12.167376   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:12.167641   29302 pod_ready.go:102] pod "coredns-5dd5756b68-qrv2r" in "kube-system" namespace has status "Ready":"False"
	I0914 19:06:12.660012   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qrv2r
	I0914 19:06:12.660034   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:12.660051   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:12.660059   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:12.664300   29302 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 19:06:12.664327   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:12.664338   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:12.664345   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:12.664352   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:12.664360   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:12.664369   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:12 GMT
	I0914 19:06:12.664384   29302 round_trippers.go:580]     Audit-Id: 49e3af30-584c-4ef5-942f-2f32701b7bc7
	I0914 19:06:12.665270   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"790","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6544 chars]
	I0914 19:06:12.665705   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:12.665719   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:12.665729   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:12.665738   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:12.668068   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:12.668088   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:12.668097   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:12.668105   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:12.668112   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:12 GMT
	I0914 19:06:12.668120   29302 round_trippers.go:580]     Audit-Id: 28f046b6-f759-4197-80f7-730e48f958ff
	I0914 19:06:12.668128   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:12.668142   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:12.668260   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:13.159876   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qrv2r
	I0914 19:06:13.159904   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:13.159912   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:13.159918   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:13.163892   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:13.163917   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:13.163928   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:13.163937   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:13 GMT
	I0914 19:06:13.163944   29302 round_trippers.go:580]     Audit-Id: 2bafd162-6571-48ef-8c6f-4b72770d2047
	I0914 19:06:13.163952   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:13.163966   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:13.163976   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:13.165138   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"882","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I0914 19:06:13.165753   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:13.165771   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:13.165782   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:13.165791   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:13.168088   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:13.168105   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:13.168112   29302 round_trippers.go:580]     Audit-Id: 767659c2-2c07-4c69-b006-9d19ff6d9f6d
	I0914 19:06:13.168118   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:13.168123   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:13.168128   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:13.168135   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:13.168143   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:13 GMT
	I0914 19:06:13.168401   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:13.168681   29302 pod_ready.go:92] pod "coredns-5dd5756b68-qrv2r" in "kube-system" namespace has status "Ready":"True"
	I0914 19:06:13.168695   29302 pod_ready.go:81] duration metric: took 3.01941396s waiting for pod "coredns-5dd5756b68-qrv2r" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:13.168703   29302 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-040952" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:13.168801   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-040952
	I0914 19:06:13.168814   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:13.168832   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:13.168846   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:13.171347   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:13.171368   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:13.171375   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:13.171380   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:13.171388   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:13.171397   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:13 GMT
	I0914 19:06:13.171404   29302 round_trippers.go:580]     Audit-Id: b18d0768-dc31-460c-beed-e50e3a19d6cf
	I0914 19:06:13.171411   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:13.172044   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-040952","namespace":"kube-system","uid":"69002c12-b452-4986-a79f-1d67702a52ef","resourceVersion":"785","creationTimestamp":"2023-09-14T19:01:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.14:2379","kubernetes.io/config.hash":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.mirror":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.seen":"2023-09-14T19:01:40.726714562Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6295 chars]
	I0914 19:06:13.172379   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:13.172391   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:13.172399   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:13.172405   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:13.175143   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:13.175157   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:13.175163   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:13 GMT
	I0914 19:06:13.175168   29302 round_trippers.go:580]     Audit-Id: f6242de5-c366-4c79-aa4f-5b2c5ce0d01e
	I0914 19:06:13.175174   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:13.175182   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:13.175190   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:13.175200   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:13.176009   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:13.176284   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-040952
	I0914 19:06:13.176295   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:13.176301   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:13.176307   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:13.178355   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:13.178376   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:13.178382   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:13.178387   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:13 GMT
	I0914 19:06:13.178393   29302 round_trippers.go:580]     Audit-Id: 8172c157-f43e-42e0-b3a6-8cbd28c89432
	I0914 19:06:13.178401   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:13.178409   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:13.178417   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:13.178832   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-040952","namespace":"kube-system","uid":"69002c12-b452-4986-a79f-1d67702a52ef","resourceVersion":"785","creationTimestamp":"2023-09-14T19:01:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.14:2379","kubernetes.io/config.hash":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.mirror":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.seen":"2023-09-14T19:01:40.726714562Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6295 chars]
	I0914 19:06:13.179275   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:13.179292   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:13.179302   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:13.179309   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:13.180983   29302 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0914 19:06:13.180994   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:13.180999   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:13.181004   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:13 GMT
	I0914 19:06:13.181009   29302 round_trippers.go:580]     Audit-Id: 7d797daa-6bd3-4f35-8046-01886aa5fa4e
	I0914 19:06:13.181014   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:13.181019   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:13.181024   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:13.181219   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:13.682300   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-040952
	I0914 19:06:13.682333   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:13.682342   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:13.682347   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:13.685143   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:13.685160   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:13.685166   29302 round_trippers.go:580]     Audit-Id: 0910f73d-781a-443b-b8e1-0d453e50ba92
	I0914 19:06:13.685172   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:13.685177   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:13.685182   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:13.685187   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:13.685192   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:13 GMT
	I0914 19:06:13.685503   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-040952","namespace":"kube-system","uid":"69002c12-b452-4986-a79f-1d67702a52ef","resourceVersion":"785","creationTimestamp":"2023-09-14T19:01:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.14:2379","kubernetes.io/config.hash":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.mirror":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.seen":"2023-09-14T19:01:40.726714562Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6295 chars]
	I0914 19:06:13.685920   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:13.685934   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:13.685941   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:13.685947   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:13.688227   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:13.688240   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:13.688246   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:13.688252   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:13.688260   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:13.688268   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:13.688281   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:13 GMT
	I0914 19:06:13.688288   29302 round_trippers.go:580]     Audit-Id: 078b7d2a-29bc-4729-9a02-7236c4049ad7
	I0914 19:06:13.688474   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:14.182102   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-040952
	I0914 19:06:14.182125   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:14.182133   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:14.182140   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:14.187517   29302 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0914 19:06:14.187544   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:14.187554   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:14.187562   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:14.187569   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:14.187577   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:14.187586   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:14 GMT
	I0914 19:06:14.187594   29302 round_trippers.go:580]     Audit-Id: dd780464-2280-4b93-b398-b175b603d0fe
	I0914 19:06:14.188035   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-040952","namespace":"kube-system","uid":"69002c12-b452-4986-a79f-1d67702a52ef","resourceVersion":"785","creationTimestamp":"2023-09-14T19:01:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.14:2379","kubernetes.io/config.hash":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.mirror":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.seen":"2023-09-14T19:01:40.726714562Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6295 chars]
	I0914 19:06:14.188554   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:14.188572   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:14.188583   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:14.188592   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:14.190606   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:14.190620   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:14.190626   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:14.190632   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:14 GMT
	I0914 19:06:14.190637   29302 round_trippers.go:580]     Audit-Id: 104efd51-1025-4755-af8b-f207cfcdb912
	I0914 19:06:14.190642   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:14.190647   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:14.190652   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:14.190979   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:14.682687   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-040952
	I0914 19:06:14.682711   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:14.682719   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:14.682725   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:14.690728   29302 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0914 19:06:14.690764   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:14.690775   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:14.690783   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:14.690791   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:14 GMT
	I0914 19:06:14.690799   29302 round_trippers.go:580]     Audit-Id: 4dc518a5-6cbd-4561-8ed6-e72b82b2abda
	I0914 19:06:14.690806   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:14.690814   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:14.690995   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-040952","namespace":"kube-system","uid":"69002c12-b452-4986-a79f-1d67702a52ef","resourceVersion":"887","creationTimestamp":"2023-09-14T19:01:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.14:2379","kubernetes.io/config.hash":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.mirror":"e3b502e37348f879efed937695a978a3","kubernetes.io/config.seen":"2023-09-14T19:01:40.726714562Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6071 chars]
	I0914 19:06:14.691406   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:14.691420   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:14.691427   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:14.691433   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:14.697743   29302 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0914 19:06:14.697765   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:14.697774   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:14.697779   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:14.697784   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:14.697789   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:14 GMT
	I0914 19:06:14.697794   29302 round_trippers.go:580]     Audit-Id: 07d3511e-72f3-415a-b985-0c38f9c2dc48
	I0914 19:06:14.697799   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:14.698080   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:14.698416   29302 pod_ready.go:92] pod "etcd-multinode-040952" in "kube-system" namespace has status "Ready":"True"
	I0914 19:06:14.698432   29302 pod_ready.go:81] duration metric: took 1.529723471s waiting for pod "etcd-multinode-040952" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:14.698448   29302 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-040952" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:14.698508   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-040952
	I0914 19:06:14.698517   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:14.698524   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:14.698530   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:14.703391   29302 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 19:06:14.703406   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:14.703412   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:14.703418   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:14.703423   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:14.703428   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:14.703433   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:14 GMT
	I0914 19:06:14.703439   29302 round_trippers.go:580]     Audit-Id: 0b9ff4df-c192-426d-837d-19a8ddc6d994
	I0914 19:06:14.703718   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-040952","namespace":"kube-system","uid":"10fd42d2-c2af-48e4-8724-c8ffe95daa20","resourceVersion":"871","creationTimestamp":"2023-09-14T19:01:41Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.14:8443","kubernetes.io/config.hash":"8756931ebb3ad632d1fa90a79d546b12","kubernetes.io/config.mirror":"8756931ebb3ad632d1fa90a79d546b12","kubernetes.io/config.seen":"2023-09-14T19:01:40.726715710Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7606 chars]
	I0914 19:06:14.704127   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:14.704140   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:14.704147   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:14.704153   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:14.706425   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:14.706444   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:14.706451   29302 round_trippers.go:580]     Audit-Id: 6eee19bb-2b91-4350-b2ae-7edfbd41930d
	I0914 19:06:14.706457   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:14.706462   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:14.706467   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:14.706472   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:14.706478   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:14 GMT
	I0914 19:06:14.706615   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:14.706908   29302 pod_ready.go:92] pod "kube-apiserver-multinode-040952" in "kube-system" namespace has status "Ready":"True"
	I0914 19:06:14.706921   29302 pod_ready.go:81] duration metric: took 8.465952ms waiting for pod "kube-apiserver-multinode-040952" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:14.706930   29302 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-040952" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:14.706986   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-040952
	I0914 19:06:14.706996   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:14.707007   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:14.707017   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:14.710085   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:14.710105   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:14.710115   29302 round_trippers.go:580]     Audit-Id: 37a4af49-de22-42c5-8342-96bdccfba829
	I0914 19:06:14.710126   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:14.710135   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:14.710143   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:14.710152   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:14.710160   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:14 GMT
	I0914 19:06:14.710726   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-040952","namespace":"kube-system","uid":"a3657cb3-c202-4067-83e1-e015b97f23c7","resourceVersion":"884","creationTimestamp":"2023-09-14T19:01:41Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"eae1e4ee5d796cbce52373fd75c02fd6","kubernetes.io/config.mirror":"eae1e4ee5d796cbce52373fd75c02fd6","kubernetes.io/config.seen":"2023-09-14T19:01:40.726708753Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7174 chars]
	I0914 19:06:14.830503   29302 request.go:629] Waited for 119.282235ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:14.830554   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:14.830558   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:14.830566   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:14.830572   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:14.833064   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:14.833083   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:14.833090   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:14 GMT
	I0914 19:06:14.833095   29302 round_trippers.go:580]     Audit-Id: 7a8584d4-7b4d-4f0c-a673-2711303dfb2c
	I0914 19:06:14.833100   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:14.833106   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:14.833110   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:14.833116   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:14.833241   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:14.833562   29302 pod_ready.go:92] pod "kube-controller-manager-multinode-040952" in "kube-system" namespace has status "Ready":"True"
	I0914 19:06:14.833577   29302 pod_ready.go:81] duration metric: took 126.641384ms waiting for pod "kube-controller-manager-multinode-040952" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:14.833587   29302 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gldkh" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:15.030888   29302 request.go:629] Waited for 197.237265ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gldkh
	I0914 19:06:15.030946   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gldkh
	I0914 19:06:15.030951   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:15.030960   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:15.030966   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:15.034339   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:15.034359   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:15.034366   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:15.034374   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:15.034386   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:15.034394   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:15.034408   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:14 GMT
	I0914 19:06:15.034416   29302 round_trippers.go:580]     Audit-Id: 3c39cfc6-1f06-4726-9679-50e437a9b84d
	I0914 19:06:15.034690   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gldkh","generateName":"kube-proxy-","namespace":"kube-system","uid":"55ba7c02-d066-4399-a622-621499fbc662","resourceVersion":"541","creationTimestamp":"2023-09-14T19:02:56Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b81636f3-a9be-4290-be24-324c7fac8ce6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:02:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b81636f3-a9be-4290-be24-324c7fac8ce6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5541 chars]
	I0914 19:06:15.230480   29302 request.go:629] Waited for 195.333524ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/multinode-040952-m02
	I0914 19:06:15.230552   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952-m02
	I0914 19:06:15.230557   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:15.230565   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:15.230574   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:15.234304   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:15.234329   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:15.234339   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:15.234347   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:15.234359   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:15.234366   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:15 GMT
	I0914 19:06:15.234377   29302 round_trippers.go:580]     Audit-Id: 4a324e73-8fa1-482f-bde6-ae80be99f721
	I0914 19:06:15.234386   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:15.234528   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952-m02","uid":"26bddb4d-d211-4e3d-a188-317e100d2aa5","resourceVersion":"608","creationTimestamp":"2023-09-14T19:02:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:02:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3266 chars]
	I0914 19:06:15.234774   29302 pod_ready.go:92] pod "kube-proxy-gldkh" in "kube-system" namespace has status "Ready":"True"
	I0914 19:06:15.234787   29302 pod_ready.go:81] duration metric: took 401.195035ms waiting for pod "kube-proxy-gldkh" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:15.234796   29302 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gpl2p" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:15.430003   29302 request.go:629] Waited for 195.152769ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gpl2p
	I0914 19:06:15.430096   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gpl2p
	I0914 19:06:15.430104   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:15.430118   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:15.430142   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:15.433237   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:15.433271   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:15.433281   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:15.433290   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:15.433300   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:15 GMT
	I0914 19:06:15.433309   29302 round_trippers.go:580]     Audit-Id: 92d372f9-e9c9-4d13-8b75-1b3ebd7f2435
	I0914 19:06:15.433321   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:15.433329   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:15.433627   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gpl2p","generateName":"kube-proxy-","namespace":"kube-system","uid":"4e6ab5b8-53fa-4e56-b534-e130dc2b3c0f","resourceVersion":"761","creationTimestamp":"2023-09-14T19:03:50Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b81636f3-a9be-4290-be24-324c7fac8ce6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:03:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b81636f3-a9be-4290-be24-324c7fac8ce6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5750 chars]
	I0914 19:06:15.630434   29302 request.go:629] Waited for 196.369841ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/multinode-040952-m03
	I0914 19:06:15.630534   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952-m03
	I0914 19:06:15.630546   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:15.630557   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:15.630568   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:15.633799   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:15.633824   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:15.633834   29302 round_trippers.go:580]     Audit-Id: 8ea32575-14e9-412a-ba38-fd00269447f5
	I0914 19:06:15.633844   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:15.633852   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:15.633864   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:15.633873   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:15.633887   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:15 GMT
	I0914 19:06:15.634144   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952-m03","uid":"28b45907-e363-4b10-afa7-ecf3cea247b8","resourceVersion":"891","creationTimestamp":"2023-09-14T19:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3084 chars]
	I0914 19:06:15.634401   29302 pod_ready.go:92] pod "kube-proxy-gpl2p" in "kube-system" namespace has status "Ready":"True"
	I0914 19:06:15.634416   29302 pod_ready.go:81] duration metric: took 399.614214ms waiting for pod "kube-proxy-gpl2p" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:15.634430   29302 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hbsmt" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:15.830846   29302 request.go:629] Waited for 196.353294ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hbsmt
	I0914 19:06:15.830928   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hbsmt
	I0914 19:06:15.830933   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:15.830945   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:15.830952   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:15.834221   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:15.834246   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:15.834259   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:15.834267   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:15.834274   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:15 GMT
	I0914 19:06:15.834282   29302 round_trippers.go:580]     Audit-Id: 44182567-ce38-4fce-a842-f78410d89ee9
	I0914 19:06:15.834289   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:15.834298   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:15.834802   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hbsmt","generateName":"kube-proxy-","namespace":"kube-system","uid":"d68fe199-9969-47a9-95a1-04e766c5dbaa","resourceVersion":"798","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b81636f3-a9be-4290-be24-324c7fac8ce6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b81636f3-a9be-4290-be24-324c7fac8ce6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5738 chars]
	I0914 19:06:16.030675   29302 request.go:629] Waited for 195.45562ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:16.030731   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:16.030736   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:16.030743   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:16.030750   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:16.034236   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:16.034260   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:16.034267   29302 round_trippers.go:580]     Audit-Id: e468604d-7ce9-469a-b812-ed3c9c650d6e
	I0914 19:06:16.034275   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:16.034281   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:16.034286   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:16.034291   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:16.034297   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:16 GMT
	I0914 19:06:16.034614   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:16.034941   29302 pod_ready.go:92] pod "kube-proxy-hbsmt" in "kube-system" namespace has status "Ready":"True"
	I0914 19:06:16.034956   29302 pod_ready.go:81] duration metric: took 400.519289ms waiting for pod "kube-proxy-hbsmt" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:16.034964   29302 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-040952" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:16.230342   29302 request.go:629] Waited for 195.324407ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-040952
	I0914 19:06:16.230449   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-040952
	I0914 19:06:16.230454   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:16.230462   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:16.230470   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:16.233547   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:16.233564   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:16.233572   29302 round_trippers.go:580]     Audit-Id: 224fde99-6866-4d6c-81fe-2f97bc0c6734
	I0914 19:06:16.233577   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:16.233587   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:16.233592   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:16.233597   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:16.233602   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:16 GMT
	I0914 19:06:16.233823   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-040952","namespace":"kube-system","uid":"386eb63c-5554-4ab9-8241-b096f390ee9c","resourceVersion":"784","creationTimestamp":"2023-09-14T19:01:38Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f83b231eda73d0afcf9cdab17117c7e6","kubernetes.io/config.mirror":"f83b231eda73d0afcf9cdab17117c7e6","kubernetes.io/config.seen":"2023-09-14T19:01:32.411176140Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5148 chars]
	I0914 19:06:16.430509   29302 request.go:629] Waited for 196.339279ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:16.430573   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:16.430580   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:16.430590   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:16.430600   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:16.433517   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:16.433535   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:16.433542   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:16.433559   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:16.433565   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:16 GMT
	I0914 19:06:16.433571   29302 round_trippers.go:580]     Audit-Id: 1da1d693-84a7-4480-b07f-7a386588f044
	I0914 19:06:16.433576   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:16.433581   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:16.433983   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:16.630679   29302 request.go:629] Waited for 196.348452ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-040952
	I0914 19:06:16.630764   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-040952
	I0914 19:06:16.630769   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:16.630776   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:16.630783   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:16.633557   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:16.633575   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:16.633582   29302 round_trippers.go:580]     Audit-Id: 2136e32a-148d-4e1d-825d-95e56e17f7f3
	I0914 19:06:16.633589   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:16.633597   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:16.633605   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:16.633612   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:16.633629   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:16 GMT
	I0914 19:06:16.634402   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-040952","namespace":"kube-system","uid":"386eb63c-5554-4ab9-8241-b096f390ee9c","resourceVersion":"784","creationTimestamp":"2023-09-14T19:01:38Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f83b231eda73d0afcf9cdab17117c7e6","kubernetes.io/config.mirror":"f83b231eda73d0afcf9cdab17117c7e6","kubernetes.io/config.seen":"2023-09-14T19:01:32.411176140Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5148 chars]
	I0914 19:06:16.830072   29302 request.go:629] Waited for 195.313935ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:16.830145   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:16.830152   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:16.830160   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:16.830168   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:16.832962   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:16.832981   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:16.832988   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:16.832993   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:16.832998   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:16.833006   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:16.833011   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:16 GMT
	I0914 19:06:16.833016   29302 round_trippers.go:580]     Audit-Id: 685468aa-007f-4cd0-908f-286f4b9b8738
	I0914 19:06:16.833566   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:17.334599   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-040952
	I0914 19:06:17.334622   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:17.334645   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:17.334652   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:17.337790   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:17.337810   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:17.337817   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:17.337823   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:17.337828   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:17 GMT
	I0914 19:06:17.337835   29302 round_trippers.go:580]     Audit-Id: 13885e51-e7a2-41bd-a4e6-27c1810b7f5b
	I0914 19:06:17.337843   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:17.337850   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:17.338071   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-040952","namespace":"kube-system","uid":"386eb63c-5554-4ab9-8241-b096f390ee9c","resourceVersion":"784","creationTimestamp":"2023-09-14T19:01:38Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f83b231eda73d0afcf9cdab17117c7e6","kubernetes.io/config.mirror":"f83b231eda73d0afcf9cdab17117c7e6","kubernetes.io/config.seen":"2023-09-14T19:01:32.411176140Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5148 chars]
	I0914 19:06:17.338439   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:17.338455   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:17.338465   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:17.338474   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:17.340824   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:17.340837   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:17.340843   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:17 GMT
	I0914 19:06:17.340848   29302 round_trippers.go:580]     Audit-Id: e2df7950-3f43-43ac-a2ff-9ebcb6aba048
	I0914 19:06:17.340854   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:17.340862   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:17.340871   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:17.340883   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:17.341277   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:17.834981   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-040952
	I0914 19:06:17.835006   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:17.835015   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:17.835021   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:17.837948   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:17.837973   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:17.837984   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:17 GMT
	I0914 19:06:17.837992   29302 round_trippers.go:580]     Audit-Id: bf96bd3c-445d-4267-b684-9a852b7ce0ca
	I0914 19:06:17.838000   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:17.838008   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:17.838020   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:17.838027   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:17.838816   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-040952","namespace":"kube-system","uid":"386eb63c-5554-4ab9-8241-b096f390ee9c","resourceVersion":"784","creationTimestamp":"2023-09-14T19:01:38Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f83b231eda73d0afcf9cdab17117c7e6","kubernetes.io/config.mirror":"f83b231eda73d0afcf9cdab17117c7e6","kubernetes.io/config.seen":"2023-09-14T19:01:32.411176140Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5148 chars]
	I0914 19:06:17.839223   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:17.839236   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:17.839244   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:17.839250   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:17.842020   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:17.842042   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:17.842052   29302 round_trippers.go:580]     Audit-Id: 58f6c61f-2107-4d49-bc25-beaf577ebc0b
	I0914 19:06:17.842063   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:17.842073   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:17.842084   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:17.842094   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:17.842104   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:17 GMT
	I0914 19:06:17.842191   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:18.334912   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-040952
	I0914 19:06:18.334936   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:18.334944   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:18.334950   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:18.337727   29302 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 19:06:18.337753   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:18.337763   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:18.337772   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:18.337784   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:18 GMT
	I0914 19:06:18.337793   29302 round_trippers.go:580]     Audit-Id: 91452a7a-9433-48f7-bb48-08448530a97b
	I0914 19:06:18.337804   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:18.337811   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:18.338243   29302 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-040952","namespace":"kube-system","uid":"386eb63c-5554-4ab9-8241-b096f390ee9c","resourceVersion":"894","creationTimestamp":"2023-09-14T19:01:38Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f83b231eda73d0afcf9cdab17117c7e6","kubernetes.io/config.mirror":"f83b231eda73d0afcf9cdab17117c7e6","kubernetes.io/config.seen":"2023-09-14T19:01:32.411176140Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4904 chars]
	I0914 19:06:18.338636   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes/multinode-040952
	I0914 19:06:18.338654   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:18.338664   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:18.338674   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:18.342026   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:18.342059   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:18.342068   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:18.342078   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:18.342085   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:18.342096   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:18 GMT
	I0914 19:06:18.342104   29302 round_trippers.go:580]     Audit-Id: a5dad678-33fe-4c2f-a5f5-c10a6380266e
	I0914 19:06:18.342118   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:18.342444   29302 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-14T19:01:37Z","fieldsType":"FieldsV1","fi [truncated 5156 chars]
	I0914 19:06:18.342720   29302 pod_ready.go:92] pod "kube-scheduler-multinode-040952" in "kube-system" namespace has status "Ready":"True"
	I0914 19:06:18.342732   29302 pod_ready.go:81] duration metric: took 2.30776305s waiting for pod "kube-scheduler-multinode-040952" in "kube-system" namespace to be "Ready" ...
	I0914 19:06:18.342741   29302 pod_ready.go:38] duration metric: took 8.201906021s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 19:06:18.342758   29302 api_server.go:52] waiting for apiserver process to appear ...
	I0914 19:06:18.342802   29302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 19:06:18.356335   29302 command_runner.go:130] > 1693
	I0914 19:06:18.356824   29302 api_server.go:72] duration metric: took 11.093271286s to wait for apiserver process to appear ...
	I0914 19:06:18.356842   29302 api_server.go:88] waiting for apiserver healthz status ...
	I0914 19:06:18.356862   29302 api_server.go:253] Checking apiserver healthz at https://192.168.39.14:8443/healthz ...
	I0914 19:06:18.362653   29302 api_server.go:279] https://192.168.39.14:8443/healthz returned 200:
	ok
	I0914 19:06:18.362710   29302 round_trippers.go:463] GET https://192.168.39.14:8443/version
	I0914 19:06:18.362717   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:18.362725   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:18.362731   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:18.363650   29302 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0914 19:06:18.363667   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:18.363677   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:18.363686   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:18.363694   29302 round_trippers.go:580]     Content-Length: 263
	I0914 19:06:18.363711   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:18 GMT
	I0914 19:06:18.363719   29302 round_trippers.go:580]     Audit-Id: 01d336c4-24b2-4b6e-a634-c932a4f80f56
	I0914 19:06:18.363728   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:18.363733   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:18.363748   29302 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.1",
	  "gitCommit": "8dc49c4b984b897d423aab4971090e1879eb4f23",
	  "gitTreeState": "clean",
	  "buildDate": "2023-08-24T11:16:30Z",
	  "goVersion": "go1.20.7",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0914 19:06:18.363790   29302 api_server.go:141] control plane version: v1.28.1
	I0914 19:06:18.363805   29302 api_server.go:131] duration metric: took 6.957442ms to wait for apiserver health ...
	I0914 19:06:18.363814   29302 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 19:06:18.363875   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
	I0914 19:06:18.363883   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:18.363889   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:18.363900   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:18.367955   29302 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 19:06:18.367989   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:18.367997   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:18.368005   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:18.368013   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:18 GMT
	I0914 19:06:18.368025   29302 round_trippers.go:580]     Audit-Id: 4a4def47-e1cc-4f97-a173-69327418d154
	I0914 19:06:18.368035   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:18.368044   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:18.369884   29302 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"894"},"items":[{"metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"882","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82928 chars]
	I0914 19:06:18.373265   29302 system_pods.go:59] 12 kube-system pods found
	I0914 19:06:18.373287   29302 system_pods.go:61] "coredns-5dd5756b68-qrv2r" [f9293d00-1000-4ffa-b978-d08c00eee7e7] Running
	I0914 19:06:18.373292   29302 system_pods.go:61] "etcd-multinode-040952" [69002c12-b452-4986-a79f-1d67702a52ef] Running
	I0914 19:06:18.373296   29302 system_pods.go:61] "kindnet-hvz8s" [38b5564e-8c11-45e8-9751-bcaa4258a342] Running
	I0914 19:06:18.373299   29302 system_pods.go:61] "kindnet-lrkhw" [9861f216-97e0-4761-9531-cb34e8855913] Running
	I0914 19:06:18.373303   29302 system_pods.go:61] "kindnet-pjfsc" [7716e479-4492-439b-9bdf-077a541dc949] Running
	I0914 19:06:18.373307   29302 system_pods.go:61] "kube-apiserver-multinode-040952" [10fd42d2-c2af-48e4-8724-c8ffe95daa20] Running
	I0914 19:06:18.373312   29302 system_pods.go:61] "kube-controller-manager-multinode-040952" [a3657cb3-c202-4067-83e1-e015b97f23c7] Running
	I0914 19:06:18.373315   29302 system_pods.go:61] "kube-proxy-gldkh" [55ba7c02-d066-4399-a622-621499fbc662] Running
	I0914 19:06:18.373326   29302 system_pods.go:61] "kube-proxy-gpl2p" [4e6ab5b8-53fa-4e56-b534-e130dc2b3c0f] Running
	I0914 19:06:18.373335   29302 system_pods.go:61] "kube-proxy-hbsmt" [d68fe199-9969-47a9-95a1-04e766c5dbaa] Running
	I0914 19:06:18.373339   29302 system_pods.go:61] "kube-scheduler-multinode-040952" [386eb63c-5554-4ab9-8241-b096f390ee9c] Running
	I0914 19:06:18.373342   29302 system_pods.go:61] "storage-provisioner" [8f25fe5b-237f-415a-baca-e4342106bb4d] Running
	I0914 19:06:18.373347   29302 system_pods.go:74] duration metric: took 9.528517ms to wait for pod list to return data ...
	I0914 19:06:18.373355   29302 default_sa.go:34] waiting for default service account to be created ...
	I0914 19:06:18.430623   29302 request.go:629] Waited for 57.191118ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/default/serviceaccounts
	I0914 19:06:18.430678   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/default/serviceaccounts
	I0914 19:06:18.430682   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:18.430689   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:18.430695   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:18.433750   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:18.433768   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:18.433775   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:18.433780   29302 round_trippers.go:580]     Content-Length: 261
	I0914 19:06:18.433785   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:18 GMT
	I0914 19:06:18.433790   29302 round_trippers.go:580]     Audit-Id: f58f454f-de35-4fde-b782-3e31600d0a05
	I0914 19:06:18.433795   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:18.433803   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:18.433808   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:18.433825   29302 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"894"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"751abfd7-43aa-4bf5-a223-71659884f01c","resourceVersion":"335","creationTimestamp":"2023-09-14T19:01:53Z"}}]}
	I0914 19:06:18.433967   29302 default_sa.go:45] found service account: "default"
	I0914 19:06:18.433981   29302 default_sa.go:55] duration metric: took 60.621039ms for default service account to be created ...
	I0914 19:06:18.433987   29302 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 19:06:18.630408   29302 request.go:629] Waited for 196.359387ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
	I0914 19:06:18.630467   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/namespaces/kube-system/pods
	I0914 19:06:18.630472   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:18.630480   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:18.630486   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:18.635088   29302 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 19:06:18.635116   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:18.635126   29302 round_trippers.go:580]     Audit-Id: 40dbf5e6-bdfd-4c25-924c-528834eef0a7
	I0914 19:06:18.635135   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:18.635142   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:18.635150   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:18.635159   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:18.635173   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:18 GMT
	I0914 19:06:18.636346   29302 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"894"},"items":[{"metadata":{"name":"coredns-5dd5756b68-qrv2r","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f9293d00-1000-4ffa-b978-d08c00eee7e7","resourceVersion":"882","creationTimestamp":"2023-09-14T19:01:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a417bd90-4dd6-4366-ab94-72a881a43225","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-14T19:01:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a417bd90-4dd6-4366-ab94-72a881a43225\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82928 chars]
	I0914 19:06:18.639989   29302 system_pods.go:86] 12 kube-system pods found
	I0914 19:06:18.640017   29302 system_pods.go:89] "coredns-5dd5756b68-qrv2r" [f9293d00-1000-4ffa-b978-d08c00eee7e7] Running
	I0914 19:06:18.640024   29302 system_pods.go:89] "etcd-multinode-040952" [69002c12-b452-4986-a79f-1d67702a52ef] Running
	I0914 19:06:18.640031   29302 system_pods.go:89] "kindnet-hvz8s" [38b5564e-8c11-45e8-9751-bcaa4258a342] Running
	I0914 19:06:18.640037   29302 system_pods.go:89] "kindnet-lrkhw" [9861f216-97e0-4761-9531-cb34e8855913] Running
	I0914 19:06:18.640043   29302 system_pods.go:89] "kindnet-pjfsc" [7716e479-4492-439b-9bdf-077a541dc949] Running
	I0914 19:06:18.640050   29302 system_pods.go:89] "kube-apiserver-multinode-040952" [10fd42d2-c2af-48e4-8724-c8ffe95daa20] Running
	I0914 19:06:18.640058   29302 system_pods.go:89] "kube-controller-manager-multinode-040952" [a3657cb3-c202-4067-83e1-e015b97f23c7] Running
	I0914 19:06:18.640064   29302 system_pods.go:89] "kube-proxy-gldkh" [55ba7c02-d066-4399-a622-621499fbc662] Running
	I0914 19:06:18.640071   29302 system_pods.go:89] "kube-proxy-gpl2p" [4e6ab5b8-53fa-4e56-b534-e130dc2b3c0f] Running
	I0914 19:06:18.640080   29302 system_pods.go:89] "kube-proxy-hbsmt" [d68fe199-9969-47a9-95a1-04e766c5dbaa] Running
	I0914 19:06:18.640088   29302 system_pods.go:89] "kube-scheduler-multinode-040952" [386eb63c-5554-4ab9-8241-b096f390ee9c] Running
	I0914 19:06:18.640095   29302 system_pods.go:89] "storage-provisioner" [8f25fe5b-237f-415a-baca-e4342106bb4d] Running
	I0914 19:06:18.640110   29302 system_pods.go:126] duration metric: took 206.118337ms to wait for k8s-apps to be running ...
	I0914 19:06:18.640118   29302 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 19:06:18.640169   29302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 19:06:18.654395   29302 system_svc.go:56] duration metric: took 14.272365ms WaitForService to wait for kubelet.
	I0914 19:06:18.654416   29302 kubeadm.go:581] duration metric: took 11.390867757s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0914 19:06:18.654443   29302 node_conditions.go:102] verifying NodePressure condition ...
	I0914 19:06:18.830833   29302 request.go:629] Waited for 176.33044ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.14:8443/api/v1/nodes
	I0914 19:06:18.830908   29302 round_trippers.go:463] GET https://192.168.39.14:8443/api/v1/nodes
	I0914 19:06:18.830915   29302 round_trippers.go:469] Request Headers:
	I0914 19:06:18.830925   29302 round_trippers.go:473]     Accept: application/json, */*
	I0914 19:06:18.830934   29302 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0914 19:06:18.833992   29302 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 19:06:18.834011   29302 round_trippers.go:577] Response Headers:
	I0914 19:06:18.834020   29302 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1ba9a6a1-5579-4925-bbf8-58d986ec527c
	I0914 19:06:18.834029   29302 round_trippers.go:580]     Date: Thu, 14 Sep 2023 19:06:18 GMT
	I0914 19:06:18.834038   29302 round_trippers.go:580]     Audit-Id: 78eec727-aee2-400e-8c95-4146a9496a91
	I0914 19:06:18.834047   29302 round_trippers.go:580]     Cache-Control: no-cache, private
	I0914 19:06:18.834056   29302 round_trippers.go:580]     Content-Type: application/json
	I0914 19:06:18.834064   29302 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4877f823-ed38-4139-b123-d7e2e11eb85c
	I0914 19:06:18.834284   29302 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"894"},"items":[{"metadata":{"name":"multinode-040952","uid":"01eeb412-8373-41b9-a9dd-3c29107a9de9","resourceVersion":"868","creationTimestamp":"2023-09-14T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-040952","kubernetes.io/os":"linux","minikube.k8s.io/commit":"677eba4579c03f097a5d68f80823c59a8add4a3b","minikube.k8s.io/name":"multinode-040952","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_14T19_01_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 13543 chars]
	I0914 19:06:18.835016   29302 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 19:06:18.835038   29302 node_conditions.go:123] node cpu capacity is 2
	I0914 19:06:18.835048   29302 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 19:06:18.835052   29302 node_conditions.go:123] node cpu capacity is 2
	I0914 19:06:18.835058   29302 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0914 19:06:18.835067   29302 node_conditions.go:123] node cpu capacity is 2
	I0914 19:06:18.835073   29302 node_conditions.go:105] duration metric: took 180.624501ms to run NodePressure ...
	I0914 19:06:18.835093   29302 start.go:228] waiting for startup goroutines ...
	I0914 19:06:18.835102   29302 start.go:233] waiting for cluster config update ...
	I0914 19:06:18.835115   29302 start.go:242] writing updated cluster config ...
	I0914 19:06:18.835683   29302 config.go:182] Loaded profile config "multinode-040952": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 19:06:18.835796   29302 profile.go:148] Saving config to /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/config.json ...
	I0914 19:06:18.838910   29302 out.go:177] * Starting worker node multinode-040952-m02 in cluster multinode-040952
	I0914 19:06:18.840147   29302 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0914 19:06:18.840163   29302 cache.go:57] Caching tarball of preloaded images
	I0914 19:06:18.840249   29302 preload.go:174] Found /home/jenkins/minikube-integration/17217-7285/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0914 19:06:18.840261   29302 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0914 19:06:18.840334   29302 profile.go:148] Saving config to /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/config.json ...
	I0914 19:06:18.840476   29302 start.go:365] acquiring machines lock for multinode-040952-m02: {Name:mk07a05e24a79016fc0a298412b40eb87df032d8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0914 19:06:18.840512   29302 start.go:369] acquired machines lock for "multinode-040952-m02" in 19.707µs
	I0914 19:06:18.840566   29302 start.go:96] Skipping create...Using existing machine configuration
	I0914 19:06:18.840575   29302 fix.go:54] fixHost starting: m02
	I0914 19:06:18.840830   29302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0914 19:06:18.840857   29302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 19:06:18.855469   29302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46067
	I0914 19:06:18.855890   29302 main.go:141] libmachine: () Calling .GetVersion
	I0914 19:06:18.856329   29302 main.go:141] libmachine: Using API Version  1
	I0914 19:06:18.856352   29302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 19:06:18.856677   29302 main.go:141] libmachine: () Calling .GetMachineName
	I0914 19:06:18.856891   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .DriverName
	I0914 19:06:18.857065   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetState
	I0914 19:06:18.858712   29302 fix.go:102] recreateIfNeeded on multinode-040952-m02: state=Stopped err=<nil>
	I0914 19:06:18.858735   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .DriverName
	W0914 19:06:18.858914   29302 fix.go:128] unexpected machine state, will restart: <nil>
	I0914 19:06:18.861118   29302 out.go:177] * Restarting existing kvm2 VM for "multinode-040952-m02" ...
	I0914 19:06:18.862649   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .Start
	I0914 19:06:18.862832   29302 main.go:141] libmachine: (multinode-040952-m02) Ensuring networks are active...
	I0914 19:06:18.863554   29302 main.go:141] libmachine: (multinode-040952-m02) Ensuring network default is active
	I0914 19:06:18.863887   29302 main.go:141] libmachine: (multinode-040952-m02) Ensuring network mk-multinode-040952 is active
	I0914 19:06:18.864247   29302 main.go:141] libmachine: (multinode-040952-m02) Getting domain xml...
	I0914 19:06:18.864791   29302 main.go:141] libmachine: (multinode-040952-m02) Creating domain...
	I0914 19:06:20.114677   29302 main.go:141] libmachine: (multinode-040952-m02) Waiting to get IP...
	I0914 19:06:20.115697   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:20.116116   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | unable to find current IP address of domain multinode-040952-m02 in network mk-multinode-040952
	I0914 19:06:20.116177   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | I0914 19:06:20.116093   29537 retry.go:31] will retry after 292.793167ms: waiting for machine to come up
	I0914 19:06:20.410624   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:20.411041   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | unable to find current IP address of domain multinode-040952-m02 in network mk-multinode-040952
	I0914 19:06:20.411062   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | I0914 19:06:20.411011   29537 retry.go:31] will retry after 329.185161ms: waiting for machine to come up
	I0914 19:06:20.741486   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:20.741956   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | unable to find current IP address of domain multinode-040952-m02 in network mk-multinode-040952
	I0914 19:06:20.741984   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | I0914 19:06:20.741922   29537 retry.go:31] will retry after 372.179082ms: waiting for machine to come up
	I0914 19:06:21.115108   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:21.115492   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | unable to find current IP address of domain multinode-040952-m02 in network mk-multinode-040952
	I0914 19:06:21.115522   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | I0914 19:06:21.115446   29537 retry.go:31] will retry after 552.546331ms: waiting for machine to come up
	I0914 19:06:21.669165   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:21.669673   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | unable to find current IP address of domain multinode-040952-m02 in network mk-multinode-040952
	I0914 19:06:21.669702   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | I0914 19:06:21.669630   29537 retry.go:31] will retry after 641.98724ms: waiting for machine to come up
	I0914 19:06:22.313770   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:22.314305   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | unable to find current IP address of domain multinode-040952-m02 in network mk-multinode-040952
	I0914 19:06:22.314344   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | I0914 19:06:22.314258   29537 retry.go:31] will retry after 792.672163ms: waiting for machine to come up
	I0914 19:06:23.108201   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:23.108628   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | unable to find current IP address of domain multinode-040952-m02 in network mk-multinode-040952
	I0914 19:06:23.108656   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | I0914 19:06:23.108582   29537 retry.go:31] will retry after 820.609535ms: waiting for machine to come up
	I0914 19:06:23.930887   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:23.931350   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | unable to find current IP address of domain multinode-040952-m02 in network mk-multinode-040952
	I0914 19:06:23.931383   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | I0914 19:06:23.931293   29537 retry.go:31] will retry after 933.919914ms: waiting for machine to come up
	I0914 19:06:24.866306   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:24.866762   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | unable to find current IP address of domain multinode-040952-m02 in network mk-multinode-040952
	I0914 19:06:24.866796   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | I0914 19:06:24.866720   29537 retry.go:31] will retry after 1.175445783s: waiting for machine to come up
	I0914 19:06:26.044181   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:26.044639   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | unable to find current IP address of domain multinode-040952-m02 in network mk-multinode-040952
	I0914 19:06:26.044674   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | I0914 19:06:26.044595   29537 retry.go:31] will retry after 1.659114662s: waiting for machine to come up
	I0914 19:06:27.705347   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:27.705796   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | unable to find current IP address of domain multinode-040952-m02 in network mk-multinode-040952
	I0914 19:06:27.705832   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | I0914 19:06:27.705738   29537 retry.go:31] will retry after 2.838813162s: waiting for machine to come up
	I0914 19:06:30.546592   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:30.547049   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | unable to find current IP address of domain multinode-040952-m02 in network mk-multinode-040952
	I0914 19:06:30.547092   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | I0914 19:06:30.547042   29537 retry.go:31] will retry after 2.43743272s: waiting for machine to come up
	I0914 19:06:32.987818   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:32.988277   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | unable to find current IP address of domain multinode-040952-m02 in network mk-multinode-040952
	I0914 19:06:32.988300   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | I0914 19:06:32.988246   29537 retry.go:31] will retry after 4.479558003s: waiting for machine to come up
	I0914 19:06:37.471961   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:37.472352   29302 main.go:141] libmachine: (multinode-040952-m02) Found IP for machine: 192.168.39.16
	I0914 19:06:37.472379   29302 main.go:141] libmachine: (multinode-040952-m02) Reserving static IP address...
	I0914 19:06:37.472392   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has current primary IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:37.472813   29302 main.go:141] libmachine: (multinode-040952-m02) Reserved static IP address: 192.168.39.16
	I0914 19:06:37.472867   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "multinode-040952-m02", mac: "52:54:00:2e:0b:03", ip: "192.168.39.16"} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
	I0914 19:06:37.472882   29302 main.go:141] libmachine: (multinode-040952-m02) Waiting for SSH to be available...
	I0914 19:06:37.472912   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | skip adding static IP to network mk-multinode-040952 - found existing host DHCP lease matching {name: "multinode-040952-m02", mac: "52:54:00:2e:0b:03", ip: "192.168.39.16"}
	I0914 19:06:37.472930   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | Getting to WaitForSSH function...
	I0914 19:06:37.474853   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:37.475216   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
	I0914 19:06:37.475243   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:37.475331   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | Using SSH client type: external
	I0914 19:06:37.475371   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17217-7285/.minikube/machines/multinode-040952-m02/id_rsa (-rw-------)
	I0914 19:06:37.475423   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.16 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17217-7285/.minikube/machines/multinode-040952-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0914 19:06:37.475447   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | About to run SSH command:
	I0914 19:06:37.475460   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | exit 0
	I0914 19:06:37.565151   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | SSH cmd err, output: <nil>: 
	I0914 19:06:37.565511   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetConfigRaw
	I0914 19:06:37.566140   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetIP
	I0914 19:06:37.568703   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:37.569097   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
	I0914 19:06:37.569132   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:37.569351   29302 profile.go:148] Saving config to /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/multinode-040952/config.json ...
	I0914 19:06:37.569551   29302 machine.go:88] provisioning docker machine ...
	I0914 19:06:37.569568   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .DriverName
	I0914 19:06:37.569768   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetMachineName
	I0914 19:06:37.569927   29302 buildroot.go:166] provisioning hostname "multinode-040952-m02"
	I0914 19:06:37.569954   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetMachineName
	I0914 19:06:37.570118   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHHostname
	I0914 19:06:37.572245   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:37.572611   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
	I0914 19:06:37.572640   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:37.572754   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHPort
	I0914 19:06:37.572896   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
	I0914 19:06:37.573067   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
	I0914 19:06:37.573182   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHUsername
	I0914 19:06:37.573336   29302 main.go:141] libmachine: Using SSH client type: native
	I0914 19:06:37.573757   29302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0914 19:06:37.573780   29302 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-040952-m02 && echo "multinode-040952-m02" | sudo tee /etc/hostname
	I0914 19:06:37.710270   29302 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-040952-m02
	
	I0914 19:06:37.710294   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHHostname
	I0914 19:06:37.712933   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:37.713287   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
	I0914 19:06:37.713322   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:37.713438   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHPort
	I0914 19:06:37.713649   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
	I0914 19:06:37.713830   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
	I0914 19:06:37.713965   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHUsername
	I0914 19:06:37.714153   29302 main.go:141] libmachine: Using SSH client type: native
	I0914 19:06:37.714540   29302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0914 19:06:37.714569   29302 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-040952-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-040952-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-040952-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 19:06:37.850271   29302 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 19:06:37.850302   29302 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17217-7285/.minikube CaCertPath:/home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17217-7285/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17217-7285/.minikube}
	I0914 19:06:37.850321   29302 buildroot.go:174] setting up certificates
	I0914 19:06:37.850331   29302 provision.go:83] configureAuth start
	I0914 19:06:37.850343   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetMachineName
	I0914 19:06:37.850630   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetIP
	I0914 19:06:37.853071   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:37.853477   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
	I0914 19:06:37.853512   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:37.853665   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHHostname
	I0914 19:06:37.855889   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:37.856295   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
	I0914 19:06:37.856327   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:37.856394   29302 provision.go:138] copyHostCerts
	I0914 19:06:37.856430   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17217-7285/.minikube/ca.pem
	I0914 19:06:37.856463   29302 exec_runner.go:144] found /home/jenkins/minikube-integration/17217-7285/.minikube/ca.pem, removing ...
	I0914 19:06:37.856473   29302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17217-7285/.minikube/ca.pem
	I0914 19:06:37.856544   29302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17217-7285/.minikube/ca.pem (1082 bytes)
	I0914 19:06:37.856653   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17217-7285/.minikube/cert.pem
	I0914 19:06:37.856672   29302 exec_runner.go:144] found /home/jenkins/minikube-integration/17217-7285/.minikube/cert.pem, removing ...
	I0914 19:06:37.856676   29302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17217-7285/.minikube/cert.pem
	I0914 19:06:37.856699   29302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17217-7285/.minikube/cert.pem (1123 bytes)
	I0914 19:06:37.856741   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17217-7285/.minikube/key.pem
	I0914 19:06:37.856756   29302 exec_runner.go:144] found /home/jenkins/minikube-integration/17217-7285/.minikube/key.pem, removing ...
	I0914 19:06:37.856762   29302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17217-7285/.minikube/key.pem
	I0914 19:06:37.856781   29302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17217-7285/.minikube/key.pem (1679 bytes)
	I0914 19:06:37.856823   29302 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17217-7285/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca-key.pem org=jenkins.multinode-040952-m02 san=[192.168.39.16 192.168.39.16 localhost 127.0.0.1 minikube multinode-040952-m02]
	I0914 19:06:37.904344   29302 provision.go:172] copyRemoteCerts
	I0914 19:06:37.904397   29302 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 19:06:37.904417   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHHostname
	I0914 19:06:37.906652   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:37.906972   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
	I0914 19:06:37.907008   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:37.907156   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHPort
	I0914 19:06:37.907312   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
	I0914 19:06:37.907470   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHUsername
	I0914 19:06:37.907613   29302 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/multinode-040952-m02/id_rsa Username:docker}
	I0914 19:06:38.000649   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0914 19:06:38.000741   29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 19:06:38.025953   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0914 19:06:38.026028   29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0914 19:06:38.048996   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0914 19:06:38.049067   29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 19:06:38.072478   29302 provision.go:86] duration metric: configureAuth took 222.133675ms
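The three scp lines above have just placed ca.pem, server.pem and server-key.pem under /etc/docker on the worker; the dockerd ExecStart generated a few lines below points --tlscacert/--tlscert/--tlskey at exactly those paths. A hedged spot-check one could run on the node (illustrative only, not part of this test run):

	# verify the copied server certificate chains to the copied CA
	sudo openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem
	# confirm the private key matches the certificate (public-key digests should be identical)
	sudo openssl x509 -noout -pubkey -in /etc/docker/server.pem | openssl sha256
	sudo openssl pkey -noout -pubout -in /etc/docker/server-key.pem | openssl sha256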
	I0914 19:06:38.072507   29302 buildroot.go:189] setting minikube options for container-runtime
	I0914 19:06:38.072712   29302 config.go:182] Loaded profile config "multinode-040952": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 19:06:38.072733   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .DriverName
	I0914 19:06:38.072954   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHHostname
	I0914 19:06:38.075633   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:38.075959   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
	I0914 19:06:38.076005   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:38.076116   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHPort
	I0914 19:06:38.076304   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
	I0914 19:06:38.076482   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
	I0914 19:06:38.076626   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHUsername
	I0914 19:06:38.076778   29302 main.go:141] libmachine: Using SSH client type: native
	I0914 19:06:38.077069   29302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0914 19:06:38.077082   29302 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0914 19:06:38.199048   29302 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0914 19:06:38.199074   29302 buildroot.go:70] root file system type: tmpfs
	I0914 19:06:38.199195   29302 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0914 19:06:38.199220   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHHostname
	I0914 19:06:38.201601   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:38.201971   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
	I0914 19:06:38.201992   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:38.202160   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHPort
	I0914 19:06:38.202374   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
	I0914 19:06:38.202529   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
	I0914 19:06:38.202642   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHUsername
	I0914 19:06:38.202785   29302 main.go:141] libmachine: Using SSH client type: native
	I0914 19:06:38.203087   29302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0914 19:06:38.203150   29302 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.39.14"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0914 19:06:38.339052   29302 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.39.14
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
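The unit echoed back above uses the override pattern its own comments describe: an empty ExecStart= first clears any inherited start command, then the single real ExecStart= is set, avoiding the "more than one ExecStart=" error for a Type=notify service. As a minimal illustration of the same pattern (a hypothetical drop-in for clarity, not a file this run creates):

	# hypothetical override drop-in showing the clear-then-set ExecStart pattern
	sudo mkdir -p /etc/systemd/system/docker.service.d
	printf '%s\n' '[Service]' 'ExecStart=' \
	    'ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock' |
	    sudo tee /etc/systemd/system/docker.service.d/override.conf
	sudo systemctl daemon-reload && sudo systemctl restart docker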
	
	I0914 19:06:38.339081   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHHostname
	I0914 19:06:38.341807   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:38.342226   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
	I0914 19:06:38.342261   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:38.342430   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHPort
	I0914 19:06:38.342621   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
	I0914 19:06:38.342798   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
	I0914 19:06:38.342954   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHUsername
	I0914 19:06:38.343119   29302 main.go:141] libmachine: Using SSH client type: native
	I0914 19:06:38.343432   29302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0914 19:06:38.343461   29302 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0914 19:06:39.223778   29302 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0914 19:06:39.223805   29302 machine.go:91] provisioned docker machine in 1.654241082s
	I0914 19:06:39.223818   29302 start.go:300] post-start starting for "multinode-040952-m02" (driver="kvm2")
	I0914 19:06:39.223828   29302 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 19:06:39.223843   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .DriverName
	I0914 19:06:39.224176   29302 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 19:06:39.224211   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHHostname
	I0914 19:06:39.226901   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:39.227247   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
	I0914 19:06:39.227280   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:39.227544   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHPort
	I0914 19:06:39.227745   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
	I0914 19:06:39.227911   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHUsername
	I0914 19:06:39.228053   29302 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/multinode-040952-m02/id_rsa Username:docker}
	I0914 19:06:39.321534   29302 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 19:06:39.325932   29302 command_runner.go:130] > NAME=Buildroot
	I0914 19:06:39.325948   29302 command_runner.go:130] > VERSION=2021.02.12-1-gaa3debf-dirty
	I0914 19:06:39.325957   29302 command_runner.go:130] > ID=buildroot
	I0914 19:06:39.325962   29302 command_runner.go:130] > VERSION_ID=2021.02.12
	I0914 19:06:39.325972   29302 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0914 19:06:39.326365   29302 info.go:137] Remote host: Buildroot 2021.02.12
	I0914 19:06:39.326381   29302 filesync.go:126] Scanning /home/jenkins/minikube-integration/17217-7285/.minikube/addons for local assets ...
	I0914 19:06:39.326432   29302 filesync.go:126] Scanning /home/jenkins/minikube-integration/17217-7285/.minikube/files for local assets ...
	I0914 19:06:39.326501   29302 filesync.go:149] local asset: /home/jenkins/minikube-integration/17217-7285/.minikube/files/etc/ssl/certs/145062.pem -> 145062.pem in /etc/ssl/certs
	I0914 19:06:39.326513   29302 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17217-7285/.minikube/files/etc/ssl/certs/145062.pem -> /etc/ssl/certs/145062.pem
	I0914 19:06:39.326584   29302 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 19:06:39.336967   29302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17217-7285/.minikube/files/etc/ssl/certs/145062.pem --> /etc/ssl/certs/145062.pem (1708 bytes)
	I0914 19:06:39.360557   29302 start.go:303] post-start completed in 136.725285ms
	I0914 19:06:39.360581   29302 fix.go:56] fixHost completed within 20.520003113s
	I0914 19:06:39.360605   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHHostname
	I0914 19:06:39.362948   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:39.363269   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
	I0914 19:06:39.363315   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:39.363388   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHPort
	I0914 19:06:39.363595   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
	I0914 19:06:39.363783   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
	I0914 19:06:39.363936   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHUsername
	I0914 19:06:39.364099   29302 main.go:141] libmachine: Using SSH client type: native
	I0914 19:06:39.364460   29302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0914 19:06:39.364472   29302 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0914 19:06:39.486077   29302 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694718399.434257584
	
	I0914 19:06:39.486101   29302 fix.go:206] guest clock: 1694718399.434257584
	I0914 19:06:39.486110   29302 fix.go:219] Guest: 2023-09-14 19:06:39.434257584 +0000 UTC Remote: 2023-09-14 19:06:39.360584834 +0000 UTC m=+78.429360914 (delta=73.67275ms)
	I0914 19:06:39.486128   29302 fix.go:190] guest clock delta is within tolerance: 73.67275ms
	I0914 19:06:39.486135   29302 start.go:83] releasing machines lock for "multinode-040952-m02", held for 20.645613984s
	I0914 19:06:39.486160   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .DriverName
	I0914 19:06:39.486442   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetIP
	I0914 19:06:39.488972   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:39.489301   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
	I0914 19:06:39.489321   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:39.491933   29302 out.go:177] * Found network options:
	I0914 19:06:39.493577   29302 out.go:177]   - NO_PROXY=192.168.39.14
	W0914 19:06:39.495217   29302 proxy.go:119] fail to check proxy env: Error ip not in block
	I0914 19:06:39.495254   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .DriverName
	I0914 19:06:39.495809   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .DriverName
	I0914 19:06:39.495995   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .DriverName
	I0914 19:06:39.496072   29302 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 19:06:39.496116   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHHostname
	W0914 19:06:39.496205   29302 proxy.go:119] fail to check proxy env: Error ip not in block
	I0914 19:06:39.496278   29302 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0914 19:06:39.496299   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHHostname
	I0914 19:06:39.498773   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:39.498969   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:39.499150   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
	I0914 19:06:39.499181   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:39.499303   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHPort
	I0914 19:06:39.499318   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:06:31 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
	I0914 19:06:39.499348   29302 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:06:39.499474   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
	I0914 19:06:39.499542   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHPort
	I0914 19:06:39.499625   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHUsername
	I0914 19:06:39.499690   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
	I0914 19:06:39.499747   29302 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/multinode-040952-m02/id_rsa Username:docker}
	I0914 19:06:39.499829   29302 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHUsername
	I0914 19:06:39.499990   29302 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/multinode-040952-m02/id_rsa Username:docker}
	I0914 19:06:39.587315   29302 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0914 19:06:39.587941   29302 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0914 19:06:39.588006   29302 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 19:06:39.610801   29302 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0914 19:06:39.610851   29302 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0914 19:06:39.610876   29302 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
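The %!p(MISSING) in the find invocation above is an artifact of how minikube formats this log line; judging by the "/etc/cni/net.d/87-podman-bridge.conflist, " output, the missing verb is almost certainly find's %p. A cleaned-up reconstruction of the command (quoting here is illustrative, not a verbatim copy of what ran):

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	    \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	    -printf '%p, ' -exec sh -c 'sudo mv {} {}.mk_disabled' \;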
	I0914 19:06:39.610891   29302 start.go:469] detecting cgroup driver to use...
	I0914 19:06:39.610989   29302 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 19:06:39.629605   29302 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0914 19:06:39.630150   29302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0914 19:06:39.641201   29302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0914 19:06:39.651880   29302 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0914 19:06:39.651937   29302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0914 19:06:39.663251   29302 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 19:06:39.674202   29302 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0914 19:06:39.685211   29302 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 19:06:39.696908   29302 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 19:06:39.709126   29302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0914 19:06:39.721014   29302 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 19:06:39.731728   29302 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0914 19:06:39.731788   29302 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 19:06:39.742220   29302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 19:06:39.854266   29302 ssh_runner.go:195] Run: sudo systemctl restart containerd
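The sed edits above point containerd at cgroupfs (SystemdCgroup = false), the runc v2 shim and /etc/cni/net.d, and the daemon-reload plus restart just above apply them. A hedged spot-check one might run on the node to confirm the rewrite took effect (not part of this run):

	# confirm the cgroup and runtime settings the sed edits produced
	sudo grep -E 'SystemdCgroup|io.containerd.runc|conf_dir' /etc/containerd/config.toml
	# containerd should be active again after the restart
	sudo systemctl is-active containerd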
	I0914 19:06:39.871417   29302 start.go:469] detecting cgroup driver to use...
	I0914 19:06:39.871488   29302 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0914 19:06:39.884609   29302 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0914 19:06:39.884650   29302 command_runner.go:130] > [Unit]
	I0914 19:06:39.884657   29302 command_runner.go:130] > Description=Docker Application Container Engine
	I0914 19:06:39.884663   29302 command_runner.go:130] > Documentation=https://docs.docker.com
	I0914 19:06:39.884669   29302 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0914 19:06:39.884677   29302 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0914 19:06:39.884682   29302 command_runner.go:130] > StartLimitBurst=3
	I0914 19:06:39.884689   29302 command_runner.go:130] > StartLimitIntervalSec=60
	I0914 19:06:39.884693   29302 command_runner.go:130] > [Service]
	I0914 19:06:39.884698   29302 command_runner.go:130] > Type=notify
	I0914 19:06:39.884702   29302 command_runner.go:130] > Restart=on-failure
	I0914 19:06:39.884708   29302 command_runner.go:130] > Environment=NO_PROXY=192.168.39.14
	I0914 19:06:39.884715   29302 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0914 19:06:39.884726   29302 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0914 19:06:39.884735   29302 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0914 19:06:39.884743   29302 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0914 19:06:39.884752   29302 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0914 19:06:39.884761   29302 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0914 19:06:39.884768   29302 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0914 19:06:39.884787   29302 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0914 19:06:39.884796   29302 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0914 19:06:39.884802   29302 command_runner.go:130] > ExecStart=
	I0914 19:06:39.884821   29302 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I0914 19:06:39.884831   29302 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0914 19:06:39.884838   29302 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0914 19:06:39.884845   29302 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0914 19:06:39.884852   29302 command_runner.go:130] > LimitNOFILE=infinity
	I0914 19:06:39.884856   29302 command_runner.go:130] > LimitNPROC=infinity
	I0914 19:06:39.884862   29302 command_runner.go:130] > LimitCORE=infinity
	I0914 19:06:39.884867   29302 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0914 19:06:39.884875   29302 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0914 19:06:39.884879   29302 command_runner.go:130] > TasksMax=infinity
	I0914 19:06:39.884888   29302 command_runner.go:130] > TimeoutStartSec=0
	I0914 19:06:39.884894   29302 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0914 19:06:39.884898   29302 command_runner.go:130] > Delegate=yes
	I0914 19:06:39.884905   29302 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0914 19:06:39.884917   29302 command_runner.go:130] > KillMode=process
	I0914 19:06:39.884923   29302 command_runner.go:130] > [Install]
	I0914 19:06:39.884929   29302 command_runner.go:130] > WantedBy=multi-user.target
	I0914 19:06:39.885921   29302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 19:06:39.902340   29302 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 19:06:39.919241   29302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 19:06:39.931882   29302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0914 19:06:39.944141   29302 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0914 19:06:39.980328   29302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0914 19:06:39.993054   29302 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 19:06:40.010119   29302 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0914 19:06:40.010413   29302 ssh_runner.go:195] Run: which cri-dockerd
	I0914 19:06:40.014171   29302 command_runner.go:130] > /usr/bin/cri-dockerd
	I0914 19:06:40.014287   29302 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0914 19:06:40.024688   29302 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0914 19:06:40.042167   29302 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0914 19:06:40.160404   29302 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0914 19:06:40.272827   29302 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0914 19:06:40.272855   29302 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0914 19:06:40.289795   29302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 19:06:40.398781   29302 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0914 19:06:41.803191   29302 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.40437357s)
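The 144-byte daemon.json pushed at 19:06:40 is not echoed in the log; given the "configuring docker to use cgroupfs as cgroup driver" line, it presumably sets the cgroup driver via exec-opts. A hedged way to confirm on the node after the restart that just completed (the file contents are an assumption, not taken from the log):

	# inspect the pushed config and the driver docker actually reports
	sudo cat /etc/docker/daemon.json
	docker info 2>/dev/null | grep -i 'cgroup driver'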
	I0914 19:06:41.803251   29302 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0914 19:06:41.905435   29302 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0914 19:06:42.032291   29302 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0914 19:06:42.160622   29302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 19:06:42.277173   29302 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0914 19:06:42.292786   29302 command_runner.go:130] ! Job failed. See "journalctl -xe" for details.
	I0914 19:06:42.294889   29302 out.go:177] 
	W0914 19:06:42.296193   29302 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	W0914 19:06:42.296210   29302 out.go:239] * 
	W0914 19:06:42.297001   29302 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 19:06:42.298210   29302 out.go:177] 
	
	* 
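The start aborts because `sudo systemctl restart cri-docker.socket` exits with status 1 on multinode-040952-m02; the journal excerpt below comes from the primary node, so it does not capture that failure itself. Hedged commands for digging into the failing unit on the worker (illustrative; the -n node flag is assumed to accept the node name as in current minikube releases):

	# from the host, open a shell on the worker node of this profile
	out/minikube-linux-amd64 ssh -p multinode-040952 -n multinode-040952-m02
	# on the node: unit status and recent journal for the failing socket
	sudo systemctl status cri-docker.socket cri-docker.service
	sudo journalctl -u cri-docker.socket -u cri-docker.service --no-pager | tail -n 50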
	* ==> Docker <==
	* -- Journal begins at Thu 2023-09-14 19:05:32 UTC, ends at Thu 2023-09-14 19:06:46 UTC. --
	Sep 14 19:06:07 multinode-040952 dockerd[833]: time="2023-09-14T19:06:07.110721289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 19:06:07 multinode-040952 dockerd[833]: time="2023-09-14T19:06:07.110740258Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 14 19:06:07 multinode-040952 dockerd[833]: time="2023-09-14T19:06:07.110748982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 19:06:10 multinode-040952 dockerd[833]: time="2023-09-14T19:06:10.560125431Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 14 19:06:10 multinode-040952 dockerd[833]: time="2023-09-14T19:06:10.561439001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 19:06:10 multinode-040952 dockerd[833]: time="2023-09-14T19:06:10.561948132Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 14 19:06:10 multinode-040952 dockerd[833]: time="2023-09-14T19:06:10.562497172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 19:06:10 multinode-040952 dockerd[833]: time="2023-09-14T19:06:10.912088487Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 14 19:06:10 multinode-040952 dockerd[833]: time="2023-09-14T19:06:10.912140403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 19:06:10 multinode-040952 dockerd[833]: time="2023-09-14T19:06:10.912165447Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 14 19:06:10 multinode-040952 dockerd[833]: time="2023-09-14T19:06:10.912176351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 19:06:11 multinode-040952 cri-dockerd[1047]: time="2023-09-14T19:06:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8c5adb06ad8644fdaa00404169cd62847107a188941b235afcd96bc74a471f36/resolv.conf as [nameserver 192.168.122.1]"
	Sep 14 19:06:11 multinode-040952 dockerd[833]: time="2023-09-14T19:06:11.248847029Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 14 19:06:11 multinode-040952 dockerd[833]: time="2023-09-14T19:06:11.248915066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 19:06:11 multinode-040952 dockerd[833]: time="2023-09-14T19:06:11.248934609Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 14 19:06:11 multinode-040952 dockerd[833]: time="2023-09-14T19:06:11.248946671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 19:06:11 multinode-040952 cri-dockerd[1047]: time="2023-09-14T19:06:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9b65f9b32fcb4cf47bc4f4ec371810e2c59f9379e67003f5d435073d09f33200/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 14 19:06:11 multinode-040952 dockerd[833]: time="2023-09-14T19:06:11.746238437Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 14 19:06:11 multinode-040952 dockerd[833]: time="2023-09-14T19:06:11.746301425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 19:06:11 multinode-040952 dockerd[833]: time="2023-09-14T19:06:11.746320987Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 14 19:06:11 multinode-040952 dockerd[833]: time="2023-09-14T19:06:11.746384615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 19:06:34 multinode-040952 dockerd[833]: time="2023-09-14T19:06:34.567374268Z" level=info msg="shim disconnected" id=c9e2f6411addd9aa2f754f78fda3ce71ac8bf7bb5ff3f65f3c0511f08e429929 namespace=moby
	Sep 14 19:06:34 multinode-040952 dockerd[833]: time="2023-09-14T19:06:34.568816508Z" level=warning msg="cleaning up after shim disconnected" id=c9e2f6411addd9aa2f754f78fda3ce71ac8bf7bb5ff3f65f3c0511f08e429929 namespace=moby
	Sep 14 19:06:34 multinode-040952 dockerd[827]: time="2023-09-14T19:06:34.569676835Z" level=info msg="ignoring event" container=c9e2f6411addd9aa2f754f78fda3ce71ac8bf7bb5ff3f65f3c0511f08e429929 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 14 19:06:34 multinode-040952 dockerd[833]: time="2023-09-14T19:06:34.570344420Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID
	45c401009e903       8c811b4aec35f                                                                                         35 seconds ago      Running             busybox                   1                   9b65f9b32fcb4
	d8bb85ef502bc       ead0a4a53df89                                                                                         35 seconds ago      Running             coredns                   1                   8c5adb06ad864
	b3f4888d47e37       c7d1297425461                                                                                         40 seconds ago      Running             kindnet-cni               1                   ecedcc81d5040
	c9e2f6411addd       6e38f40d628db                                                                                         42 seconds ago      Exited              storage-provisioner       1                   6517274d37d45
	9057a95faf814       6cdbabde3874e                                                                                         43 seconds ago      Running             kube-proxy                1                   baaaa29d51d71
	1c691ff0fb1dc       b462ce0c8b1ff                                                                                         47 seconds ago      Running             kube-scheduler            1                   a2717cfc7b703
	d2a4b9fbe6163       73deb9a3f7025                                                                                         48 seconds ago      Running             etcd                      1                   8003d9c05224c
	b6362a20e1ba8       5c801295c21d0                                                                                         48 seconds ago      Running             kube-apiserver            1                   d62732c77e111
	7551a7f5f8d28       821b3dfea27be                                                                                         48 seconds ago      Running             kube-controller-manager   1                   d33e8c5c8b80c
	b2201408c190d       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   3 minutes ago       Exited              busybox                   0                   606d676847d38
	5ca168b256eca       ead0a4a53df89                                                                                         4 minutes ago       Exited              coredns                   0                   fb2dbcea99e9f
	1dac2d18ee960       kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052              4 minutes ago       Exited              kindnet-cni               0                   2c6b193d8f06a
	bd14e8416f22e       6cdbabde3874e                                                                                         4 minutes ago       Exited              kube-proxy                0                   ac89590af9af7
	e7dd2a8d2bf2a       b462ce0c8b1ff                                                                                         5 minutes ago       Exited              kube-scheduler            0                   3204588282f3d
	79de1cbad023f       73deb9a3f7025                                                                                         5 minutes ago       Exited              etcd                      0                   992d221cf3de6
	bdae306df7741       821b3dfea27be                                                                                         5 minutes ago       Exited              kube-controller-manager   0                   c60a4b7edf2a5
	7ae1932584ffa       5c801295c21d0                                                                                         5 minutes ago       Exited              kube-apiserver            0                   bf69af78fefd5
	
	* 
	* ==> coredns [5ca168b256ec] <==
	* [INFO] 10.244.1.2:34807 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001920386s
	[INFO] 10.244.1.2:58373 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000223623s
	[INFO] 10.244.1.2:34744 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000097963s
	[INFO] 10.244.1.2:42669 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00110869s
	[INFO] 10.244.1.2:49456 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000084315s
	[INFO] 10.244.1.2:36531 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000105982s
	[INFO] 10.244.1.2:44052 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073712s
	[INFO] 10.244.0.3:53028 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102025s
	[INFO] 10.244.0.3:60397 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000219163s
	[INFO] 10.244.0.3:58611 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119555s
	[INFO] 10.244.0.3:56794 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000389586s
	[INFO] 10.244.1.2:57290 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000238838s
	[INFO] 10.244.1.2:38598 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000112648s
	[INFO] 10.244.1.2:36747 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000130289s
	[INFO] 10.244.1.2:44678 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000130001s
	[INFO] 10.244.0.3:56148 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000416563s
	[INFO] 10.244.0.3:48925 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00015457s
	[INFO] 10.244.0.3:37027 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000266436s
	[INFO] 10.244.0.3:58029 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000132942s
	[INFO] 10.244.1.2:32850 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000167159s
	[INFO] 10.244.1.2:52181 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000075407s
	[INFO] 10.244.1.2:33878 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000077018s
	[INFO] 10.244.1.2:33144 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000119325s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [d8bb85ef502b] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:51360 - 19367 "HINFO IN 781133024460292738.4424492601979386444. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.021489339s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-040952
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-040952
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=677eba4579c03f097a5d68f80823c59a8add4a3b
	                    minikube.k8s.io/name=multinode-040952
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_14T19_01_41_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Sep 2023 19:01:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-040952
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Sep 2023 19:06:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Sep 2023 19:06:09 +0000   Thu, 14 Sep 2023 19:01:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Sep 2023 19:06:09 +0000   Thu, 14 Sep 2023 19:01:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Sep 2023 19:06:09 +0000   Thu, 14 Sep 2023 19:01:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Sep 2023 19:06:09 +0000   Thu, 14 Sep 2023 19:06:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.14
	  Hostname:    multinode-040952
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 a22e570b53364d97906f6fbadc119046
	  System UUID:                a22e570b-5336-4d97-906f-6fbadc119046
	  Boot ID:                    805cf3f0-f992-49df-b9c1-1c815bc938ec
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-8xj5t                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 coredns-5dd5756b68-qrv2r                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m53s
	  kube-system                 etcd-multinode-040952                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m5s
	  kube-system                 kindnet-hvz8s                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m53s
	  kube-system                 kube-apiserver-multinode-040952             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 kube-controller-manager-multinode-040952    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 kube-proxy-hbsmt                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 kube-scheduler-multinode-040952             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m51s                  kube-proxy       
	  Normal  Starting                 42s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  5m14s (x8 over 5m14s)  kubelet          Node multinode-040952 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m14s (x8 over 5m14s)  kubelet          Node multinode-040952 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m14s (x7 over 5m14s)  kubelet          Node multinode-040952 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m6s                   kubelet          Node multinode-040952 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m6s                   kubelet          Node multinode-040952 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     5m6s                   kubelet          Node multinode-040952 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m6s                   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m5s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m53s                  node-controller  Node multinode-040952 event: Registered Node multinode-040952 in Controller
	  Normal  NodeReady                4m41s                  kubelet          Node multinode-040952 status is now: NodeReady
	  Normal  Starting                 50s                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  50s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  49s (x8 over 50s)      kubelet          Node multinode-040952 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    49s (x8 over 50s)      kubelet          Node multinode-040952 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     49s (x7 over 50s)      kubelet          Node multinode-040952 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           32s                    node-controller  Node multinode-040952 event: Registered Node multinode-040952 in Controller
	
	
	Name:               multinode-040952-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-040952-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Sep 2023 19:02:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-040952-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Sep 2023 19:04:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Sep 2023 19:03:27 +0000   Thu, 14 Sep 2023 19:02:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Sep 2023 19:03:27 +0000   Thu, 14 Sep 2023 19:02:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Sep 2023 19:03:27 +0000   Thu, 14 Sep 2023 19:02:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Sep 2023 19:03:27 +0000   Thu, 14 Sep 2023 19:03:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.16
	  Hostname:    multinode-040952-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 275cf71437384b3685d193f4ccec91cc
	  System UUID:                275cf714-3738-4b36-85d1-93f4ccec91cc
	  Boot ID:                    9d1451db-6918-461e-9cc4-16724afd48c4
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-msf7r    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 kindnet-lrkhw               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m50s
	  kube-system                 kube-proxy-gldkh            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m44s                  kube-proxy       
	  Normal  Starting                 3m50s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m50s (x2 over 3m50s)  kubelet          Node multinode-040952-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m50s (x2 over 3m50s)  kubelet          Node multinode-040952-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m50s (x2 over 3m50s)  kubelet          Node multinode-040952-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m50s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m48s                  node-controller  Node multinode-040952-m02 event: Registered Node multinode-040952-m02 in Controller
	  Normal  NodeReady                3m37s                  kubelet          Node multinode-040952-m02 status is now: NodeReady
	  Normal  RegisteredNode           32s                    node-controller  Node multinode-040952-m02 event: Registered Node multinode-040952-m02 in Controller
	
	* 
	* ==> dmesg <==
	* [Sep14 19:05] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.071026] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.320578] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.256122] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.139451] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.741731] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.615091] systemd-fstab-generator[514]: Ignoring "noauto" for root device
	[  +0.091320] systemd-fstab-generator[526]: Ignoring "noauto" for root device
	[  +1.160091] systemd-fstab-generator[754]: Ignoring "noauto" for root device
	[  +0.277522] systemd-fstab-generator[794]: Ignoring "noauto" for root device
	[  +0.106300] systemd-fstab-generator[805]: Ignoring "noauto" for root device
	[  +0.125747] systemd-fstab-generator[818]: Ignoring "noauto" for root device
	[  +0.569199] systemd-fstab-generator[992]: Ignoring "noauto" for root device
	[  +0.109950] systemd-fstab-generator[1003]: Ignoring "noauto" for root device
	[  +0.112895] systemd-fstab-generator[1014]: Ignoring "noauto" for root device
	[  +0.113984] systemd-fstab-generator[1025]: Ignoring "noauto" for root device
	[  +0.119773] systemd-fstab-generator[1039]: Ignoring "noauto" for root device
	[ +11.953340] systemd-fstab-generator[1284]: Ignoring "noauto" for root device
	[  +0.384554] kauditd_printk_skb: 67 callbacks suppressed
	
	* 
	* ==> etcd [79de1cbad023] <==
	* {"level":"info","ts":"2023-09-14T19:01:36.01867Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"599035dfeb7e0476 became leader at term 2"}
	{"level":"info","ts":"2023-09-14T19:01:36.018676Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 599035dfeb7e0476 elected leader 599035dfeb7e0476 at term 2"}
	{"level":"info","ts":"2023-09-14T19:01:36.0202Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"599035dfeb7e0476","local-member-attributes":"{Name:multinode-040952 ClientURLs:[https://192.168.39.14:2379]}","request-path":"/0/members/599035dfeb7e0476/attributes","cluster-id":"7dcc0a60dbbc15a1","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-14T19:01:36.020483Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-14T19:01:36.020568Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-14T19:01:36.022008Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-14T19:01:36.022275Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-14T19:01:36.022291Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-14T19:01:36.022636Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.14:2379"}
	{"level":"info","ts":"2023-09-14T19:01:36.022715Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T19:01:36.024658Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7dcc0a60dbbc15a1","local-member-id":"599035dfeb7e0476","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T19:01:36.024747Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T19:01:36.024765Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T19:03:51.807588Z","caller":"traceutil/trace.go:171","msg":"trace[23883446] transaction","detail":"{read_only:false; response_revision:658; number_of_response:1; }","duration":"129.206345ms","start":"2023-09-14T19:03:51.678265Z","end":"2023-09-14T19:03:51.807471Z","steps":["trace[23883446] 'process raft request'  (duration: 129.086639ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-14T19:04:52.930829Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-09-14T19:04:52.930966Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"multinode-040952","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.14:2380"],"advertise-client-urls":["https://192.168.39.14:2379"]}
	{"level":"warn","ts":"2023-09-14T19:04:52.931161Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-14T19:04:52.931257Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-14T19:04:52.932088Z","caller":"v3rpc/watch.go:473","msg":"failed to send watch response to gRPC stream","error":"rpc error: code = Unavailable desc = transport is closing"}
	{"level":"warn","ts":"2023-09-14T19:04:52.952017Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.14:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-14T19:04:52.952093Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.14:2379: use of closed network connection"}
	{"level":"info","ts":"2023-09-14T19:04:52.952149Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"599035dfeb7e0476","current-leader-member-id":"599035dfeb7e0476"}
	{"level":"info","ts":"2023-09-14T19:04:52.955652Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.14:2380"}
	{"level":"info","ts":"2023-09-14T19:04:52.955754Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.14:2380"}
	{"level":"info","ts":"2023-09-14T19:04:52.955763Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"multinode-040952","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.14:2380"],"advertise-client-urls":["https://192.168.39.14:2379"]}
	
	* 
	* ==> etcd [d2a4b9fbe616] <==
	* {"level":"info","ts":"2023-09-14T19:05:59.734271Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-14T19:05:59.734297Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-14T19:05:59.740699Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-09-14T19:05:59.743953Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"599035dfeb7e0476 switched to configuration voters=(6453717501866804342)"}
	{"level":"info","ts":"2023-09-14T19:05:59.746046Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"7dcc0a60dbbc15a1","local-member-id":"599035dfeb7e0476","added-peer-id":"599035dfeb7e0476","added-peer-peer-urls":["https://192.168.39.14:2380"]}
	{"level":"info","ts":"2023-09-14T19:05:59.746423Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7dcc0a60dbbc15a1","local-member-id":"599035dfeb7e0476","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T19:05:59.746624Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-14T19:05:59.744002Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.14:2380"}
	{"level":"info","ts":"2023-09-14T19:05:59.762875Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.14:2380"}
	{"level":"info","ts":"2023-09-14T19:05:59.767737Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"599035dfeb7e0476","initial-advertise-peer-urls":["https://192.168.39.14:2380"],"listen-peer-urls":["https://192.168.39.14:2380"],"advertise-client-urls":["https://192.168.39.14:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.14:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-09-14T19:05:59.767794Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-09-14T19:06:00.733425Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"599035dfeb7e0476 is starting a new election at term 2"}
	{"level":"info","ts":"2023-09-14T19:06:00.733712Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"599035dfeb7e0476 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-09-14T19:06:00.73392Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"599035dfeb7e0476 received MsgPreVoteResp from 599035dfeb7e0476 at term 2"}
	{"level":"info","ts":"2023-09-14T19:06:00.734128Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"599035dfeb7e0476 became candidate at term 3"}
	{"level":"info","ts":"2023-09-14T19:06:00.73421Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"599035dfeb7e0476 received MsgVoteResp from 599035dfeb7e0476 at term 3"}
	{"level":"info","ts":"2023-09-14T19:06:00.734234Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"599035dfeb7e0476 became leader at term 3"}
	{"level":"info","ts":"2023-09-14T19:06:00.734355Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 599035dfeb7e0476 elected leader 599035dfeb7e0476 at term 3"}
	{"level":"info","ts":"2023-09-14T19:06:00.738829Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"599035dfeb7e0476","local-member-attributes":"{Name:multinode-040952 ClientURLs:[https://192.168.39.14:2379]}","request-path":"/0/members/599035dfeb7e0476/attributes","cluster-id":"7dcc0a60dbbc15a1","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-14T19:06:00.739125Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-14T19:06:00.739447Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-14T19:06:00.739493Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-14T19:06:00.739514Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-14T19:06:00.740785Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-14T19:06:00.740794Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.14:2379"}
	
	* 
	* ==> kernel <==
	*  19:06:46 up 1 min,  0 users,  load average: 1.17, 0.36, 0.12
	Linux multinode-040952 5.10.57 #1 SMP Tue Sep 12 02:34:33 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [1dac2d18ee96] <==
	* I0914 19:04:13.417146       1 main.go:223] Handling node with IPs: map[192.168.39.14:{}]
	I0914 19:04:13.417297       1 main.go:227] handling current node
	I0914 19:04:13.417313       1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
	I0914 19:04:13.417322       1 main.go:250] Node multinode-040952-m02 has CIDR [10.244.1.0/24] 
	I0914 19:04:13.417671       1 main.go:223] Handling node with IPs: map[192.168.39.107:{}]
	I0914 19:04:13.417972       1 main.go:250] Node multinode-040952-m03 has CIDR [10.244.2.0/24] 
	I0914 19:04:23.424504       1 main.go:223] Handling node with IPs: map[192.168.39.14:{}]
	I0914 19:04:23.425037       1 main.go:227] handling current node
	I0914 19:04:23.425203       1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
	I0914 19:04:23.425329       1 main.go:250] Node multinode-040952-m02 has CIDR [10.244.1.0/24] 
	I0914 19:04:23.425757       1 main.go:223] Handling node with IPs: map[192.168.39.107:{}]
	I0914 19:04:23.425805       1 main.go:250] Node multinode-040952-m03 has CIDR [10.244.2.0/24] 
	I0914 19:04:33.433351       1 main.go:223] Handling node with IPs: map[192.168.39.14:{}]
	I0914 19:04:33.433474       1 main.go:227] handling current node
	I0914 19:04:33.433513       1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
	I0914 19:04:33.434156       1 main.go:250] Node multinode-040952-m02 has CIDR [10.244.1.0/24] 
	I0914 19:04:33.434804       1 main.go:223] Handling node with IPs: map[192.168.39.107:{}]
	I0914 19:04:33.435075       1 main.go:250] Node multinode-040952-m03 has CIDR [10.244.2.0/24] 
	I0914 19:04:43.456778       1 main.go:223] Handling node with IPs: map[192.168.39.14:{}]
	I0914 19:04:43.457185       1 main.go:227] handling current node
	I0914 19:04:43.457215       1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
	I0914 19:04:43.457226       1 main.go:250] Node multinode-040952-m02 has CIDR [10.244.1.0/24] 
	I0914 19:04:43.457383       1 main.go:223] Handling node with IPs: map[192.168.39.107:{}]
	I0914 19:04:43.457389       1 main.go:250] Node multinode-040952-m03 has CIDR [10.244.3.0/24] 
	I0914 19:04:43.457441       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 192.168.39.107 Flags: [] Table: 0} 
	
	* 
	* ==> kindnet [b3f4888d47e3] <==
	* I0914 19:06:08.275205       1 main.go:227] handling current node
	I0914 19:06:08.275662       1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
	I0914 19:06:08.275676       1 main.go:250] Node multinode-040952-m02 has CIDR [10.244.1.0/24] 
	I0914 19:06:08.275797       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.39.16 Flags: [] Table: 0} 
	I0914 19:06:08.275887       1 main.go:223] Handling node with IPs: map[192.168.39.107:{}]
	I0914 19:06:08.275896       1 main.go:250] Node multinode-040952-m03 has CIDR [10.244.3.0/24] 
	I0914 19:06:08.275949       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 192.168.39.107 Flags: [] Table: 0} 
	I0914 19:06:18.290953       1 main.go:223] Handling node with IPs: map[192.168.39.14:{}]
	I0914 19:06:18.290991       1 main.go:227] handling current node
	I0914 19:06:18.291009       1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
	I0914 19:06:18.291014       1 main.go:250] Node multinode-040952-m02 has CIDR [10.244.1.0/24] 
	I0914 19:06:18.291123       1 main.go:223] Handling node with IPs: map[192.168.39.107:{}]
	I0914 19:06:18.291128       1 main.go:250] Node multinode-040952-m03 has CIDR [10.244.3.0/24] 
	I0914 19:06:28.307114       1 main.go:223] Handling node with IPs: map[192.168.39.14:{}]
	I0914 19:06:28.307170       1 main.go:227] handling current node
	I0914 19:06:28.307193       1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
	I0914 19:06:28.307199       1 main.go:250] Node multinode-040952-m02 has CIDR [10.244.1.0/24] 
	I0914 19:06:28.307346       1 main.go:223] Handling node with IPs: map[192.168.39.107:{}]
	I0914 19:06:28.307381       1 main.go:250] Node multinode-040952-m03 has CIDR [10.244.3.0/24] 
	I0914 19:06:38.313370       1 main.go:223] Handling node with IPs: map[192.168.39.14:{}]
	I0914 19:06:38.313758       1 main.go:227] handling current node
	I0914 19:06:38.314072       1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
	I0914 19:06:38.314290       1 main.go:250] Node multinode-040952-m02 has CIDR [10.244.1.0/24] 
	I0914 19:06:38.314714       1 main.go:223] Handling node with IPs: map[192.168.39.107:{}]
	I0914 19:06:38.314906       1 main.go:250] Node multinode-040952-m03 has CIDR [10.244.3.0/24] 
	
	* 
	* ==> kube-apiserver [7ae1932584ff] <==
	* }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 19:05:02.925127       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 19:05:02.938224       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0914 19:05:02.943236       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-apiserver [b6362a20e1ba] <==
	* I0914 19:06:02.103379       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0914 19:06:02.103893       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0914 19:06:02.103947       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0914 19:06:02.227119       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0914 19:06:02.271711       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0914 19:06:02.304807       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0914 19:06:02.304872       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0914 19:06:02.305849       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0914 19:06:02.305890       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0914 19:06:02.306061       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0914 19:06:02.331297       1 shared_informer.go:318] Caches are synced for configmaps
	I0914 19:06:02.331358       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0914 19:06:02.335150       1 aggregator.go:166] initial CRD sync complete...
	I0914 19:06:02.335193       1 autoregister_controller.go:141] Starting autoregister controller
	I0914 19:06:02.335200       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0914 19:06:02.335206       1 cache.go:39] Caches are synced for autoregister controller
	I0914 19:06:03.100463       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0914 19:06:03.368706       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.14]
	I0914 19:06:03.370054       1 controller.go:624] quota admission added evaluator for: endpoints
	I0914 19:06:03.376360       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0914 19:06:05.169364       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0914 19:06:05.329658       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0914 19:06:05.341332       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0914 19:06:05.419400       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0914 19:06:05.426410       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	* 
	* ==> kube-controller-manager [7551a7f5f8d2] <==
	* I0914 19:06:14.661435       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0914 19:06:14.661442       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0914 19:06:14.664911       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0914 19:06:14.667650       1 shared_informer.go:318] Caches are synced for job
	I0914 19:06:14.678438       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0914 19:06:14.684625       1 shared_informer.go:318] Caches are synced for endpoint
	I0914 19:06:14.710898       1 shared_informer.go:318] Caches are synced for attach detach
	I0914 19:06:14.717414       1 shared_informer.go:318] Caches are synced for daemon sets
	I0914 19:06:14.743422       1 shared_informer.go:318] Caches are synced for taint
	I0914 19:06:14.743617       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0914 19:06:14.744935       1 event.go:307] "Event occurred" object="multinode-040952" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-040952 event: Registered Node multinode-040952 in Controller"
	I0914 19:06:14.744976       1 event.go:307] "Event occurred" object="multinode-040952-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-040952-m02 event: Registered Node multinode-040952-m02 in Controller"
	I0914 19:06:14.744985       1 event.go:307] "Event occurred" object="multinode-040952-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-040952-m03 event: Registered Node multinode-040952-m03 in Controller"
	I0914 19:06:14.747755       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0914 19:06:14.747973       1 shared_informer.go:318] Caches are synced for resource quota
	I0914 19:06:14.748234       1 taint_manager.go:211] "Sending events to api server"
	I0914 19:06:14.755944       1 shared_informer.go:318] Caches are synced for resource quota
	I0914 19:06:14.758787       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-040952"
	I0914 19:06:14.759112       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-040952-m02"
	I0914 19:06:14.759307       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-040952-m03"
	I0914 19:06:14.761326       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0914 19:06:15.192335       1 shared_informer.go:318] Caches are synced for garbage collector
	I0914 19:06:15.196730       1 shared_informer.go:318] Caches are synced for garbage collector
	I0914 19:06:15.196764       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0914 19:06:45.068527       1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-040952-m02"
	
	* 
	* ==> kube-controller-manager [bdae306df774] <==
	* I0914 19:03:11.800269       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I0914 19:03:11.822032       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-msf7r"
	I0914 19:03:11.832933       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-8xj5t"
	I0914 19:03:11.858800       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="61.587243ms"
	I0914 19:03:11.881601       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="22.674253ms"
	I0914 19:03:11.911272       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="29.257061ms"
	I0914 19:03:11.911865       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="129.703µs"
	I0914 19:03:13.323606       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-msf7r" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-msf7r"
	I0914 19:03:14.759110       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="8.215323ms"
	I0914 19:03:14.759979       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="35.128µs"
	I0914 19:03:15.674480       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="8.700191ms"
	I0914 19:03:15.674657       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="46.358µs"
	I0914 19:03:50.546206       1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-040952-m02"
	I0914 19:03:50.547815       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-040952-m03\" does not exist"
	I0914 19:03:50.566383       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-gpl2p"
	I0914 19:03:50.573363       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-pjfsc"
	I0914 19:03:50.579177       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-040952-m03" podCIDRs=["10.244.2.0/24"]
	I0914 19:03:53.329628       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-040952-m03"
	I0914 19:03:53.330341       1 event.go:307] "Event occurred" object="multinode-040952-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-040952-m03 event: Registered Node multinode-040952-m03 in Controller"
	I0914 19:04:06.424965       1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-040952-m02"
	I0914 19:04:40.617462       1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-040952-m02"
	I0914 19:04:41.474271       1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-040952-m02"
	I0914 19:04:41.476212       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-040952-m03\" does not exist"
	I0914 19:04:41.488035       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-040952-m03" podCIDRs=["10.244.3.0/24"]
	I0914 19:04:49.789872       1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-040952-m02"
	
	* 
	* ==> kube-proxy [9057a95faf81] <==
	* I0914 19:06:04.144375       1 server_others.go:69] "Using iptables proxy"
	I0914 19:06:04.170724       1 node.go:141] Successfully retrieved node IP: 192.168.39.14
	I0914 19:06:04.450059       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0914 19:06:04.450082       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0914 19:06:04.458361       1 server_others.go:152] "Using iptables Proxier"
	I0914 19:06:04.459621       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0914 19:06:04.460661       1 server.go:846] "Version info" version="v1.28.1"
	I0914 19:06:04.461096       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 19:06:04.466061       1 config.go:188] "Starting service config controller"
	I0914 19:06:04.466932       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0914 19:06:04.467389       1 config.go:97] "Starting endpoint slice config controller"
	I0914 19:06:04.467710       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0914 19:06:04.469390       1 config.go:315] "Starting node config controller"
	I0914 19:06:04.469898       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0914 19:06:04.568257       1 shared_informer.go:318] Caches are synced for service config
	I0914 19:06:04.568320       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0914 19:06:04.571747       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-proxy [bd14e8416f22] <==
	* I0914 19:01:54.607139       1 server_others.go:69] "Using iptables proxy"
	I0914 19:01:54.619412       1 node.go:141] Successfully retrieved node IP: 192.168.39.14
	I0914 19:01:54.687340       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0914 19:01:54.687387       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0914 19:01:54.690390       1 server_others.go:152] "Using iptables Proxier"
	I0914 19:01:54.690676       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0914 19:01:54.690863       1 server.go:846] "Version info" version="v1.28.1"
	I0914 19:01:54.690874       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 19:01:54.691425       1 config.go:188] "Starting service config controller"
	I0914 19:01:54.691480       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0914 19:01:54.691505       1 config.go:97] "Starting endpoint slice config controller"
	I0914 19:01:54.691634       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0914 19:01:54.693270       1 config.go:315] "Starting node config controller"
	I0914 19:01:54.693313       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0914 19:01:54.792627       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0914 19:01:54.792662       1 shared_informer.go:318] Caches are synced for service config
	I0914 19:01:54.793421       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [1c691ff0fb1d] <==
	* I0914 19:06:00.284533       1 serving.go:348] Generated self-signed cert in-memory
	W0914 19:06:02.177631       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0914 19:06:02.177821       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0914 19:06:02.178051       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0914 19:06:02.178277       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0914 19:06:02.270392       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.1"
	I0914 19:06:02.270853       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 19:06:02.286074       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0914 19:06:02.290157       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0914 19:06:02.290663       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0914 19:06:02.290679       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0914 19:06:02.393949       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [e7dd2a8d2bf2] <==
	* E0914 19:01:37.477320       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0914 19:01:37.477458       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0914 19:01:37.477507       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0914 19:01:38.288201       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0914 19:01:38.288230       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0914 19:01:38.315971       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0914 19:01:38.315998       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0914 19:01:38.401116       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0914 19:01:38.401259       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0914 19:01:38.486649       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0914 19:01:38.486726       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0914 19:01:38.559583       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0914 19:01:38.559638       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0914 19:01:38.654661       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0914 19:01:38.654763       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0914 19:01:38.746863       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0914 19:01:38.747118       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0914 19:01:38.748736       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0914 19:01:38.749082       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0914 19:01:38.759272       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0914 19:01:38.759300       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0914 19:01:40.363415       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0914 19:04:52.977252       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0914 19:04:52.977363       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0914 19:04:52.977770       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-09-14 19:05:32 UTC, ends at Thu 2023-09-14 19:06:47 UTC. --
	Sep 14 19:06:03 multinode-040952 kubelet[1290]: E0914 19:06:03.334153    1290 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 14 19:06:03 multinode-040952 kubelet[1290]: E0914 19:06:03.334219    1290 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f9293d00-1000-4ffa-b978-d08c00eee7e7-config-volume podName:f9293d00-1000-4ffa-b978-d08c00eee7e7 nodeName:}" failed. No retries permitted until 2023-09-14 19:06:04.334203478 +0000 UTC m=+7.832049981 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/f9293d00-1000-4ffa-b978-d08c00eee7e7-config-volume") pod "coredns-5dd5756b68-qrv2r" (UID: "f9293d00-1000-4ffa-b978-d08c00eee7e7") : object "kube-system"/"coredns" not registered
	Sep 14 19:06:03 multinode-040952 kubelet[1290]: E0914 19:06:03.435647    1290 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Sep 14 19:06:03 multinode-040952 kubelet[1290]: E0914 19:06:03.435679    1290 projected.go:198] Error preparing data for projected volume kube-api-access-x7fmj for pod default/busybox-5bc68d56bd-8xj5t: object "default"/"kube-root-ca.crt" not registered
	Sep 14 19:06:03 multinode-040952 kubelet[1290]: E0914 19:06:03.435727    1290 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a8ee02a0-c9ae-454d-902d-c10e99f35812-kube-api-access-x7fmj podName:a8ee02a0-c9ae-454d-902d-c10e99f35812 nodeName:}" failed. No retries permitted until 2023-09-14 19:06:04.435713596 +0000 UTC m=+7.933560098 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-x7fmj" (UniqueName: "kubernetes.io/projected/a8ee02a0-c9ae-454d-902d-c10e99f35812-kube-api-access-x7fmj") pod "busybox-5bc68d56bd-8xj5t" (UID: "a8ee02a0-c9ae-454d-902d-c10e99f35812") : object "default"/"kube-root-ca.crt" not registered
	Sep 14 19:06:04 multinode-040952 kubelet[1290]: E0914 19:06:04.343855    1290 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 14 19:06:04 multinode-040952 kubelet[1290]: E0914 19:06:04.343919    1290 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f9293d00-1000-4ffa-b978-d08c00eee7e7-config-volume podName:f9293d00-1000-4ffa-b978-d08c00eee7e7 nodeName:}" failed. No retries permitted until 2023-09-14 19:06:06.343905485 +0000 UTC m=+9.841751999 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/f9293d00-1000-4ffa-b978-d08c00eee7e7-config-volume") pod "coredns-5dd5756b68-qrv2r" (UID: "f9293d00-1000-4ffa-b978-d08c00eee7e7") : object "kube-system"/"coredns" not registered
	Sep 14 19:06:04 multinode-040952 kubelet[1290]: E0914 19:06:04.444793    1290 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Sep 14 19:06:04 multinode-040952 kubelet[1290]: E0914 19:06:04.444924    1290 projected.go:198] Error preparing data for projected volume kube-api-access-x7fmj for pod default/busybox-5bc68d56bd-8xj5t: object "default"/"kube-root-ca.crt" not registered
	Sep 14 19:06:04 multinode-040952 kubelet[1290]: E0914 19:06:04.445066    1290 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a8ee02a0-c9ae-454d-902d-c10e99f35812-kube-api-access-x7fmj podName:a8ee02a0-c9ae-454d-902d-c10e99f35812 nodeName:}" failed. No retries permitted until 2023-09-14 19:06:06.445023628 +0000 UTC m=+9.942870143 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-x7fmj" (UniqueName: "kubernetes.io/projected/a8ee02a0-c9ae-454d-902d-c10e99f35812-kube-api-access-x7fmj") pod "busybox-5bc68d56bd-8xj5t" (UID: "a8ee02a0-c9ae-454d-902d-c10e99f35812") : object "default"/"kube-root-ca.crt" not registered
	Sep 14 19:06:04 multinode-040952 kubelet[1290]: E0914 19:06:04.836832    1290 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5bc68d56bd-8xj5t" podUID="a8ee02a0-c9ae-454d-902d-c10e99f35812"
	Sep 14 19:06:04 multinode-040952 kubelet[1290]: E0914 19:06:04.836934    1290 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-qrv2r" podUID="f9293d00-1000-4ffa-b978-d08c00eee7e7"
	Sep 14 19:06:06 multinode-040952 kubelet[1290]: E0914 19:06:06.360509    1290 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 14 19:06:06 multinode-040952 kubelet[1290]: E0914 19:06:06.360711    1290 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f9293d00-1000-4ffa-b978-d08c00eee7e7-config-volume podName:f9293d00-1000-4ffa-b978-d08c00eee7e7 nodeName:}" failed. No retries permitted until 2023-09-14 19:06:10.360695397 +0000 UTC m=+13.858541911 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/f9293d00-1000-4ffa-b978-d08c00eee7e7-config-volume") pod "coredns-5dd5756b68-qrv2r" (UID: "f9293d00-1000-4ffa-b978-d08c00eee7e7") : object "kube-system"/"coredns" not registered
	Sep 14 19:06:06 multinode-040952 kubelet[1290]: E0914 19:06:06.461710    1290 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Sep 14 19:06:06 multinode-040952 kubelet[1290]: E0914 19:06:06.461760    1290 projected.go:198] Error preparing data for projected volume kube-api-access-x7fmj for pod default/busybox-5bc68d56bd-8xj5t: object "default"/"kube-root-ca.crt" not registered
	Sep 14 19:06:06 multinode-040952 kubelet[1290]: E0914 19:06:06.461858    1290 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a8ee02a0-c9ae-454d-902d-c10e99f35812-kube-api-access-x7fmj podName:a8ee02a0-c9ae-454d-902d-c10e99f35812 nodeName:}" failed. No retries permitted until 2023-09-14 19:06:10.461842696 +0000 UTC m=+13.959689202 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-x7fmj" (UniqueName: "kubernetes.io/projected/a8ee02a0-c9ae-454d-902d-c10e99f35812-kube-api-access-x7fmj") pod "busybox-5bc68d56bd-8xj5t" (UID: "a8ee02a0-c9ae-454d-902d-c10e99f35812") : object "default"/"kube-root-ca.crt" not registered
	Sep 14 19:06:06 multinode-040952 kubelet[1290]: I0914 19:06:06.956674    1290 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecedcc81d5040d88abcafe724d7ff2140b999b458d0e93f11b00ad6783066a7b"
	Sep 14 19:06:08 multinode-040952 kubelet[1290]: E0914 19:06:08.069490    1290 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5bc68d56bd-8xj5t" podUID="a8ee02a0-c9ae-454d-902d-c10e99f35812"
	Sep 14 19:06:08 multinode-040952 kubelet[1290]: E0914 19:06:08.077183    1290 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-qrv2r" podUID="f9293d00-1000-4ffa-b978-d08c00eee7e7"
	Sep 14 19:06:09 multinode-040952 kubelet[1290]: I0914 19:06:09.602526    1290 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Sep 14 19:06:11 multinode-040952 kubelet[1290]: I0914 19:06:11.624814    1290 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b65f9b32fcb4cf47bc4f4ec371810e2c59f9379e67003f5d435073d09f33200"
	Sep 14 19:06:34 multinode-040952 kubelet[1290]: I0914 19:06:34.964746    1290 scope.go:117] "RemoveContainer" containerID="bda018c9a602e0ece971914d9996bb4c59847a4417bdfa7d7cfee531dbe1b929"
	Sep 14 19:06:34 multinode-040952 kubelet[1290]: I0914 19:06:34.965104    1290 scope.go:117] "RemoveContainer" containerID="c9e2f6411addd9aa2f754f78fda3ce71ac8bf7bb5ff3f65f3c0511f08e429929"
	Sep 14 19:06:34 multinode-040952 kubelet[1290]: E0914 19:06:34.965323    1290 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8f25fe5b-237f-415a-baca-e4342106bb4d)\"" pod="kube-system/storage-provisioner" podUID="8f25fe5b-237f-415a-baca-e4342106bb4d"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-040952 -n multinode-040952
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-040952 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/DeleteNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/DeleteNode (3.02s)
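The kubelet entries captured in the post-mortem above ('object "kube-system"/"coredns" not registered', 'cni config uninitialized', storage-provisioner in CrashLoopBackOff) are the transient errors typically seen right after a VM restart, while the CNI config is rewritten and the kubelet's object cache refills; the underlying failure is the restart in RestartKeepsNodes exiting with status 90, which the DeleteNode failure then appears to follow from. A minimal local-triage sketch, assuming the same binaries and profile name as this run:

	out/minikube-linux-amd64 stop -p multinode-040952
	out/minikube-linux-amd64 start -p multinode-040952 --wait=true -v=8 --alsologtostderr
	out/minikube-linux-amd64 node list -p multinode-040952
	out/minikube-linux-amd64 logs -p multinode-040952
	kubectl --context multinode-040952 get pods -A --field-selector=status.phase!=Running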

                                                
                                    

Test pass (284/317)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 13.51
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.06
10 TestDownloadOnly/v1.28.1/json-events 7.1
11 TestDownloadOnly/v1.28.1/preload-exists 0
15 TestDownloadOnly/v1.28.1/LogsDuration 0.06
16 TestDownloadOnly/DeleteAll 0.13
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.11
19 TestBinaryMirror 0.54
20 TestOffline 135.59
22 TestAddons/Setup 152.59
24 TestAddons/parallel/Registry 15.8
25 TestAddons/parallel/Ingress 23.73
26 TestAddons/parallel/InspektorGadget 10.86
27 TestAddons/parallel/MetricsServer 6.2
28 TestAddons/parallel/HelmTiller 14.55
30 TestAddons/parallel/CSI 54.98
31 TestAddons/parallel/Headlamp 15.34
32 TestAddons/parallel/CloudSpanner 5.69
35 TestAddons/serial/GCPAuth/Namespaces 0.13
36 TestAddons/StoppedEnableDisable 13.35
37 TestCertOptions 53.93
38 TestCertExpiration 321.74
39 TestDockerFlags 58.54
40 TestForceSystemdFlag 56.88
41 TestForceSystemdEnv 88.86
43 TestKVMDriverInstallOrUpdate 3.61
47 TestErrorSpam/setup 51.57
48 TestErrorSpam/start 0.33
49 TestErrorSpam/status 0.75
50 TestErrorSpam/pause 1.16
51 TestErrorSpam/unpause 1.32
52 TestErrorSpam/stop 4.2
55 TestFunctional/serial/CopySyncFile 0
56 TestFunctional/serial/StartWithProxy 63.23
57 TestFunctional/serial/AuditLog 0
58 TestFunctional/serial/SoftStart 38.53
59 TestFunctional/serial/KubeContext 0.04
60 TestFunctional/serial/KubectlGetPods 0.1
63 TestFunctional/serial/CacheCmd/cache/add_remote 2.35
64 TestFunctional/serial/CacheCmd/cache/add_local 1.31
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
66 TestFunctional/serial/CacheCmd/cache/list 0.04
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
68 TestFunctional/serial/CacheCmd/cache/cache_reload 1.14
69 TestFunctional/serial/CacheCmd/cache/delete 0.08
70 TestFunctional/serial/MinikubeKubectlCmd 0.1
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.09
72 TestFunctional/serial/ExtraConfig 41.7
73 TestFunctional/serial/ComponentHealth 0.07
74 TestFunctional/serial/LogsCmd 1.07
75 TestFunctional/serial/LogsFileCmd 1.08
76 TestFunctional/serial/InvalidService 5.19
78 TestFunctional/parallel/ConfigCmd 0.3
79 TestFunctional/parallel/DashboardCmd 19.96
80 TestFunctional/parallel/DryRun 0.27
81 TestFunctional/parallel/InternationalLanguage 0.15
82 TestFunctional/parallel/StatusCmd 0.88
86 TestFunctional/parallel/ServiceCmdConnect 10.59
87 TestFunctional/parallel/AddonsCmd 0.12
88 TestFunctional/parallel/PersistentVolumeClaim 56.94
90 TestFunctional/parallel/SSHCmd 0.47
91 TestFunctional/parallel/CpCmd 0.88
92 TestFunctional/parallel/MySQL 40.57
93 TestFunctional/parallel/FileSync 0.22
94 TestFunctional/parallel/CertSync 1.52
98 TestFunctional/parallel/NodeLabels 0.07
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.22
102 TestFunctional/parallel/License 0.2
103 TestFunctional/parallel/ServiceCmd/DeployApp 15.22
104 TestFunctional/parallel/ProfileCmd/profile_not_create 0.31
105 TestFunctional/parallel/ProfileCmd/profile_list 0.29
106 TestFunctional/parallel/MountCmd/any-port 11.89
107 TestFunctional/parallel/ProfileCmd/profile_json_output 0.36
108 TestFunctional/parallel/MountCmd/specific-port 1.71
109 TestFunctional/parallel/MountCmd/VerifyCleanup 1.45
110 TestFunctional/parallel/ServiceCmd/List 0.44
111 TestFunctional/parallel/ServiceCmd/JSONOutput 0.47
112 TestFunctional/parallel/ServiceCmd/HTTPS 0.39
122 TestFunctional/parallel/ServiceCmd/Format 0.31
123 TestFunctional/parallel/Version/short 0.04
124 TestFunctional/parallel/Version/components 0.68
125 TestFunctional/parallel/ServiceCmd/URL 0.33
126 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
127 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
128 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
129 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
130 TestFunctional/parallel/ImageCommands/ImageBuild 3.6
131 TestFunctional/parallel/ImageCommands/Setup 1.53
132 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.31
133 TestFunctional/parallel/DockerEnv/bash 0.88
134 TestFunctional/parallel/UpdateContextCmd/no_changes 0.08
135 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
136 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
137 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.5
138 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.29
139 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.7
140 TestFunctional/parallel/ImageCommands/ImageRemove 0.55
141 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.29
142 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.91
143 TestFunctional/delete_addon-resizer_images 0.07
144 TestFunctional/delete_my-image_image 0.01
145 TestFunctional/delete_minikube_cached_images 0.01
146 TestGvisorAddon 288.02
149 TestImageBuild/serial/Setup 52.59
150 TestImageBuild/serial/NormalBuild 1.75
151 TestImageBuild/serial/BuildWithBuildArg 1.29
152 TestImageBuild/serial/BuildWithDockerIgnore 0.37
153 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.27
156 TestIngressAddonLegacy/StartLegacyK8sCluster 81.64
158 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 17.46
159 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.53
160 TestIngressAddonLegacy/serial/ValidateIngressAddons 45.33
163 TestJSONOutput/start/Command 103.11
164 TestJSONOutput/start/Audit 0
166 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
167 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
169 TestJSONOutput/pause/Command 0.56
170 TestJSONOutput/pause/Audit 0
172 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
173 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
175 TestJSONOutput/unpause/Command 0.51
176 TestJSONOutput/unpause/Audit 0
178 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
179 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/stop/Command 13.1
182 TestJSONOutput/stop/Audit 0
184 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
186 TestErrorJSONOutput 0.19
191 TestMainNoArgs 0.04
192 TestMinikubeProfile 107.07
195 TestMountStart/serial/StartWithMountFirst 29.85
196 TestMountStart/serial/VerifyMountFirst 0.36
197 TestMountStart/serial/StartWithMountSecond 28.74
198 TestMountStart/serial/VerifyMountSecond 0.38
199 TestMountStart/serial/DeleteFirst 0.88
200 TestMountStart/serial/VerifyMountPostDelete 0.38
201 TestMountStart/serial/Stop 2.13
202 TestMountStart/serial/RestartStopped 25.89
203 TestMountStart/serial/VerifyMountPostStop 0.39
206 TestMultiNode/serial/FreshStart2Nodes 138.54
207 TestMultiNode/serial/DeployApp2Nodes 5.74
208 TestMultiNode/serial/PingHostFrom2Pods 0.85
209 TestMultiNode/serial/AddNode 50.81
210 TestMultiNode/serial/ProfileList 0.2
211 TestMultiNode/serial/CopyFile 7.2
212 TestMultiNode/serial/StopNode 3.95
213 TestMultiNode/serial/StartAfterStop 32.12
216 TestMultiNode/serial/StopMultiNode 112.58
217 TestMultiNode/serial/RestartMultiNode 95.96
218 TestMultiNode/serial/ValidateNameConflict 56.16
223 TestPreload 182.94
225 TestScheduledStopUnix 123.03
226 TestSkaffold 139.28
229 TestRunningBinaryUpgrade 173.59
231 TestKubernetesUpgrade 277.5
244 TestStoppedBinaryUpgrade/Setup 1.28
245 TestStoppedBinaryUpgrade/Upgrade 189.01
247 TestPause/serial/Start 105.82
255 TestPause/serial/SecondStartNoReconfiguration 52.29
256 TestStoppedBinaryUpgrade/MinikubeLogs 1.05
257 TestPause/serial/Pause 0.63
258 TestPause/serial/VerifyStatus 0.26
259 TestPause/serial/Unpause 0.58
260 TestPause/serial/PauseAgain 0.88
261 TestPause/serial/DeletePaused 1.04
262 TestPause/serial/VerifyDeletedResources 0.4
264 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
265 TestNoKubernetes/serial/StartWithK8s 63.13
266 TestNoKubernetes/serial/StartWithStopK8s 42.67
267 TestNoKubernetes/serial/Start 34.66
268 TestNetworkPlugins/group/auto/Start 83.92
269 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
270 TestNoKubernetes/serial/ProfileList 71.6
271 TestNetworkPlugins/group/kindnet/Start 79.72
272 TestNetworkPlugins/group/auto/KubeletFlags 0.22
273 TestNetworkPlugins/group/auto/NetCatPod 12.32
274 TestNoKubernetes/serial/Stop 2.14
275 TestNoKubernetes/serial/StartNoArgs 34.86
276 TestNetworkPlugins/group/auto/DNS 0.19
277 TestNetworkPlugins/group/auto/Localhost 0.15
278 TestNetworkPlugins/group/auto/HairPin 0.16
279 TestNetworkPlugins/group/calico/Start 129.36
280 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
281 TestNetworkPlugins/group/custom-flannel/Start 130.25
282 TestNetworkPlugins/group/kindnet/ControllerPod 5.03
283 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
284 TestNetworkPlugins/group/kindnet/NetCatPod 11.38
285 TestNetworkPlugins/group/kindnet/DNS 0.17
286 TestNetworkPlugins/group/kindnet/Localhost 0.15
287 TestNetworkPlugins/group/kindnet/HairPin 0.16
288 TestNetworkPlugins/group/false/Start 103.54
289 TestNetworkPlugins/group/enable-default-cni/Start 124.53
290 TestNetworkPlugins/group/calico/ControllerPod 5.04
291 TestNetworkPlugins/group/calico/KubeletFlags 0.21
292 TestNetworkPlugins/group/calico/NetCatPod 13.46
293 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.23
294 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.43
295 TestNetworkPlugins/group/calico/DNS 0.2
296 TestNetworkPlugins/group/calico/Localhost 0.15
297 TestNetworkPlugins/group/calico/HairPin 0.17
298 TestNetworkPlugins/group/custom-flannel/DNS 0.24
299 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
300 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
301 TestNetworkPlugins/group/false/KubeletFlags 0.24
302 TestNetworkPlugins/group/false/NetCatPod 13.54
303 TestNetworkPlugins/group/flannel/Start 83.24
304 TestNetworkPlugins/group/bridge/Start 100.76
305 TestNetworkPlugins/group/false/DNS 0.19
306 TestNetworkPlugins/group/false/Localhost 0.16
307 TestNetworkPlugins/group/false/HairPin 0.17
308 TestNetworkPlugins/group/kubenet/Start 112.55
309 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.2
310 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.31
311 TestNetworkPlugins/group/enable-default-cni/DNS 0.22
312 TestNetworkPlugins/group/enable-default-cni/Localhost 0.19
313 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
315 TestStartStop/group/old-k8s-version/serial/FirstStart 359.75
316 TestNetworkPlugins/group/flannel/ControllerPod 5.02
317 TestNetworkPlugins/group/flannel/KubeletFlags 0.24
318 TestNetworkPlugins/group/flannel/NetCatPod 12.35
319 TestNetworkPlugins/group/flannel/DNS 0.27
320 TestNetworkPlugins/group/flannel/Localhost 0.19
321 TestNetworkPlugins/group/flannel/HairPin 0.18
322 TestNetworkPlugins/group/bridge/KubeletFlags 0.25
323 TestNetworkPlugins/group/bridge/NetCatPod 13.35
324 TestNetworkPlugins/group/bridge/DNS 16.09
326 TestStartStop/group/no-preload/serial/FirstStart 92.39
327 TestNetworkPlugins/group/bridge/Localhost 0.22
328 TestNetworkPlugins/group/bridge/HairPin 0.19
329 TestNetworkPlugins/group/kubenet/KubeletFlags 0.25
330 TestNetworkPlugins/group/kubenet/NetCatPod 13.37
332 TestStartStop/group/embed-certs/serial/FirstStart 114.59
333 TestNetworkPlugins/group/kubenet/DNS 0.28
334 TestNetworkPlugins/group/kubenet/Localhost 0.23
335 TestNetworkPlugins/group/kubenet/HairPin 0.25
337 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 86.09
338 TestStartStop/group/no-preload/serial/DeployApp 10.59
339 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.38
340 TestStartStop/group/no-preload/serial/Stop 13.12
341 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
342 TestStartStop/group/no-preload/serial/SecondStart 331.52
343 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.49
344 TestStartStop/group/embed-certs/serial/DeployApp 9.43
345 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.02
346 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.12
347 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.11
348 TestStartStop/group/embed-certs/serial/Stop 13.11
349 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
350 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 337.22
351 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
352 TestStartStop/group/embed-certs/serial/SecondStart 349.86
353 TestStartStop/group/old-k8s-version/serial/DeployApp 9.44
354 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.84
355 TestStartStop/group/old-k8s-version/serial/Stop 13.12
356 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
357 TestStartStop/group/old-k8s-version/serial/SecondStart 451.34
358 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 20.02
359 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
360 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
361 TestStartStop/group/no-preload/serial/Pause 2.51
363 TestStartStop/group/newest-cni/serial/FirstStart 74.24
364 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.02
365 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
366 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.33
367 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.05
368 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 22.02
369 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
370 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
371 TestStartStop/group/embed-certs/serial/Pause 2.67
372 TestStartStop/group/newest-cni/serial/DeployApp 0
373 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.02
374 TestStartStop/group/newest-cni/serial/Stop 13.1
375 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.16
376 TestStartStop/group/newest-cni/serial/SecondStart 47.02
377 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
378 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
379 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
380 TestStartStop/group/newest-cni/serial/Pause 2.27
381 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
382 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
383 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.22
384 TestStartStop/group/old-k8s-version/serial/Pause 2.28
x
+
TestDownloadOnly/v1.16.0/json-events (13.51s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-246148 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-246148 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2 : (13.507673459s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (13.51s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-246148
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-246148: exit status 85 (57.987719ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-246148 | jenkins | v1.31.2 | 14 Sep 23 18:43 UTC |          |
	|         | -p download-only-246148        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/14 18:43:08
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 18:43:08.483121   14519 out.go:296] Setting OutFile to fd 1 ...
	I0914 18:43:08.483363   14519 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 18:43:08.483372   14519 out.go:309] Setting ErrFile to fd 2...
	I0914 18:43:08.483376   14519 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 18:43:08.483553   14519 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17217-7285/.minikube/bin
	W0914 18:43:08.483662   14519 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17217-7285/.minikube/config/config.json: open /home/jenkins/minikube-integration/17217-7285/.minikube/config/config.json: no such file or directory
	I0914 18:43:08.484216   14519 out.go:303] Setting JSON to true
	I0914 18:43:08.485067   14519 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":1538,"bootTime":1694715451,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 18:43:08.485123   14519 start.go:138] virtualization: kvm guest
	I0914 18:43:08.487551   14519 out.go:97] [download-only-246148] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0914 18:43:08.489124   14519 out.go:169] MINIKUBE_LOCATION=17217
	W0914 18:43:08.487637   14519 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17217-7285/.minikube/cache/preloaded-tarball: no such file or directory
	I0914 18:43:08.487714   14519 notify.go:220] Checking for updates...
	I0914 18:43:08.491858   14519 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 18:43:08.493305   14519 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17217-7285/kubeconfig
	I0914 18:43:08.494632   14519 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17217-7285/.minikube
	I0914 18:43:08.496048   14519 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0914 18:43:08.499332   14519 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0914 18:43:08.499574   14519 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 18:43:08.615293   14519 out.go:97] Using the kvm2 driver based on user configuration
	I0914 18:43:08.615315   14519 start.go:298] selected driver: kvm2
	I0914 18:43:08.615323   14519 start.go:902] validating driver "kvm2" against <nil>
	I0914 18:43:08.615621   14519 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 18:43:08.615743   14519 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17217-7285/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 18:43:08.630022   14519 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0914 18:43:08.630066   14519 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0914 18:43:08.630491   14519 start_flags.go:384] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0914 18:43:08.630636   14519 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0914 18:43:08.630682   14519 cni.go:84] Creating CNI manager for ""
	I0914 18:43:08.630698   14519 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0914 18:43:08.630704   14519 start_flags.go:321] config:
	{Name:download-only-246148 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-246148 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunt
ime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 18:43:08.630891   14519 iso.go:125] acquiring lock: {Name:mk542b08865b5897b02c4d217212972b66d5575d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 18:43:08.632717   14519 out.go:97] Downloading VM boot image ...
	I0914 18:43:08.632744   14519 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17217-7285/.minikube/cache/iso/amd64/minikube-v1.31.0-1694468241-17194-amd64.iso
	I0914 18:43:11.200602   14519 out.go:97] Starting control plane node download-only-246148 in cluster download-only-246148
	I0914 18:43:11.200623   14519 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0914 18:43:11.227192   14519 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0914 18:43:11.227224   14519 cache.go:57] Caching tarball of preloaded images
	I0914 18:43:11.227405   14519 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0914 18:43:11.229885   14519 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0914 18:43:11.229912   14519 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0914 18:43:11.260826   14519 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /home/jenkins/minikube-integration/17217-7285/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0914 18:43:14.962413   14519 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0914 18:43:14.962506   14519 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17217-7285/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0914 18:43:15.724481   14519 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0914 18:43:15.724829   14519 profile.go:148] Saving config to /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/download-only-246148/config.json ...
	I0914 18:43:15.724856   14519 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/download-only-246148/config.json: {Name:mkd3086a6efad5eb42f38f672aa687ed0ccc731f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:43:15.725005   14519 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0914 18:43:15.725159   14519 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/17217-7285/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-246148"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.06s)
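The exit status 85 above is tolerated by the test: --download-only only caches the ISO, the preload tarball and kubectl, so no VM or control plane exists yet and "minikube logs" has nothing to read (hence 'The control plane node "" does not exist'). A rough sketch of the same sequence outside the harness, assuming a stock minikube binary:

	minikube start -o=json --download-only -p download-only-246148 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2
	minikube logs -p download-only-246148    # non-zero exit until the profile is actually started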

                                                
                                    
x
+
TestDownloadOnly/v1.28.1/json-events (7.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.1/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-246148 --force --alsologtostderr --kubernetes-version=v1.28.1 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-246148 --force --alsologtostderr --kubernetes-version=v1.28.1 --container-runtime=docker --driver=kvm2 : (7.103095366s)
--- PASS: TestDownloadOnly/v1.28.1/json-events (7.10s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.1/preload-exists
--- PASS: TestDownloadOnly/v1.28.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.1/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-246148
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-246148: exit status 85 (55.496306ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-246148 | jenkins | v1.31.2 | 14 Sep 23 18:43 UTC |          |
	|         | -p download-only-246148        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-246148 | jenkins | v1.31.2 | 14 Sep 23 18:43 UTC |          |
	|         | -p download-only-246148        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.1   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/14 18:43:22
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 18:43:22.053323   14588 out.go:296] Setting OutFile to fd 1 ...
	I0914 18:43:22.053442   14588 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 18:43:22.053452   14588 out.go:309] Setting ErrFile to fd 2...
	I0914 18:43:22.053474   14588 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 18:43:22.053735   14588 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17217-7285/.minikube/bin
	W0914 18:43:22.053883   14588 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17217-7285/.minikube/config/config.json: open /home/jenkins/minikube-integration/17217-7285/.minikube/config/config.json: no such file or directory
	I0914 18:43:22.054308   14588 out.go:303] Setting JSON to true
	I0914 18:43:22.055119   14588 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":1551,"bootTime":1694715451,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 18:43:22.055176   14588 start.go:138] virtualization: kvm guest
	I0914 18:43:22.057527   14588 out.go:97] [download-only-246148] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0914 18:43:22.059114   14588 out.go:169] MINIKUBE_LOCATION=17217
	I0914 18:43:22.057675   14588 notify.go:220] Checking for updates...
	I0914 18:43:22.062053   14588 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 18:43:22.063762   14588 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17217-7285/kubeconfig
	I0914 18:43:22.065319   14588 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17217-7285/.minikube
	I0914 18:43:22.066791   14588 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0914 18:43:22.069584   14588 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0914 18:43:22.070019   14588 config.go:182] Loaded profile config "download-only-246148": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0914 18:43:22.070068   14588 start.go:810] api.Load failed for download-only-246148: filestore "download-only-246148": Docker machine "download-only-246148" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0914 18:43:22.070161   14588 driver.go:373] Setting default libvirt URI to qemu:///system
	W0914 18:43:22.070205   14588 start.go:810] api.Load failed for download-only-246148: filestore "download-only-246148": Docker machine "download-only-246148" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0914 18:43:22.106158   14588 out.go:97] Using the kvm2 driver based on existing profile
	I0914 18:43:22.106194   14588 start.go:298] selected driver: kvm2
	I0914 18:43:22.106203   14588 start.go:902] validating driver "kvm2" against &{Name:download-only-246148 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.16.0 ClusterName:download-only-246148 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 18:43:22.106643   14588 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 18:43:22.106725   14588 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17217-7285/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0914 18:43:22.122630   14588 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0914 18:43:22.123257   14588 cni.go:84] Creating CNI manager for ""
	I0914 18:43:22.123278   14588 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 18:43:22.123289   14588 start_flags.go:321] config:
	{Name:download-only-246148 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:download-only-246148 Namespace:defa
ult APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: Sock
etVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 18:43:22.123472   14588 iso.go:125] acquiring lock: {Name:mk542b08865b5897b02c4d217212972b66d5575d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 18:43:22.125203   14588 out.go:97] Starting control plane node download-only-246148 in cluster download-only-246148
	I0914 18:43:22.125218   14588 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0914 18:43:22.149647   14588 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.1/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-amd64.tar.lz4
	I0914 18:43:22.149674   14588 cache.go:57] Caching tarball of preloaded images
	I0914 18:43:22.149813   14588 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0914 18:43:22.151740   14588 out.go:97] Downloading Kubernetes v1.28.1 preload ...
	I0914 18:43:22.151768   14588 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.1-docker-overlay2-amd64.tar.lz4 ...
	I0914 18:43:22.188022   14588 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.1/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-amd64.tar.lz4?checksum=md5:e86539672b8ce9a3040455131c2fbb87 -> /home/jenkins/minikube-integration/17217-7285/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-246148"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.1/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAlwaysSucceeds (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-246148
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.11s)

                                                
                                    
x
+
TestBinaryMirror (0.54s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-352024 --alsologtostderr --binary-mirror http://127.0.0.1:37529 --driver=kvm2 
helpers_test.go:175: Cleaning up "binary-mirror-352024" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-352024
--- PASS: TestBinaryMirror (0.54s)
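TestBinaryMirror exercises the --binary-mirror flag, which (per the minikube flag help) changes where the kubectl, kubelet and kubeadm binaries are fetched from; here the test points it at what appears to be a locally served mirror on 127.0.0.1:37529. An illustrative invocation against a self-hosted mirror (the profile name and URL below are placeholders):

	minikube start --download-only -p binary-mirror-test --binary-mirror http://127.0.0.1:37529 --driver=kvm2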

                                                
                                    
x
+
TestOffline (135.59s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-573899 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-573899 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 : (2m14.242152446s)
helpers_test.go:175: Cleaning up "offline-docker-573899" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-573899
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-573899: (1.347520549s)
--- PASS: TestOffline (135.59s)

                                                
                                    
x
+
TestAddons/Setup (152.59s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p addons-417207 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p addons-417207 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m32.585026409s)
--- PASS: TestAddons/Setup (152.59s)
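The setup start enables ten addons in a single invocation. The same addons can also be toggled on an already-running profile; a short sketch using the addon names from the command above:

	out/minikube-linux-amd64 -p addons-417207 addons enable metrics-server
	out/minikube-linux-amd64 -p addons-417207 addons enable csi-hostpath-driver
	out/minikube-linux-amd64 -p addons-417207 addons list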

                                                
                                    
x
+
TestAddons/parallel/Registry (15.8s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:306: registry stabilized in 15.688639ms
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-vhnzx" [dc3248d8-d87a-4132-b16a-d44b1adf03de] Running
addons_test.go:308: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.022461672s
addons_test.go:311: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-cm6rx" [0edf5f24-2703-4eff-991c-c2e2277052cc] Running
addons_test.go:311: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.032383708s
addons_test.go:316: (dbg) Run:  kubectl --context addons-417207 delete po -l run=registry-test --now
addons_test.go:321: (dbg) Run:  kubectl --context addons-417207 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:321: (dbg) Done: kubectl --context addons-417207 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.017762011s)
addons_test.go:335: (dbg) Run:  out/minikube-linux-amd64 -p addons-417207 ip
2023/09/14 18:46:17 [DEBUG] GET http://192.168.39.47:5000
addons_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p addons-417207 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.80s)
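The registry check has two halves: an in-cluster probe (a busybox pod running wget against the registry Service DNS name) and a host-side probe of the node IP on port 5000, which is the DEBUG GET line above. A manual spot-check along the same lines (the probe pod name here is just illustrative):

	kubectl --context addons-417207 run --rm -it registry-probe --restart=Never --image=gcr.io/k8s-minikube/busybox -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
	curl -sI http://$(out/minikube-linux-amd64 -p addons-417207 ip):5000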

                                                
                                    
x
+
TestAddons/parallel/Ingress (23.73s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-417207 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:208: (dbg) Run:  kubectl --context addons-417207 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context addons-417207 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [06578662-703f-4cff-b4d6-0996e9eee66d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [06578662-703f-4cff-b4d6-0996e9eee66d] Running
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.014047886s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p addons-417207 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context addons-417207 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p addons-417207 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.39.47
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p addons-417207 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p addons-417207 addons disable ingress-dns --alsologtostderr -v=1: (2.188219251s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p addons-417207 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p addons-417207 addons disable ingress --alsologtostderr -v=1: (7.728461031s)
--- PASS: TestAddons/parallel/Ingress (23.73s)
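Both ingress checks run against the VM directly: the HTTP path curls the ingress controller on port 80 from inside the VM (via minikube ssh) with a spoofed Host header, and the ingress-dns path resolves a test hostname straight against the minikube IP. Repeating them outside the harness, with the hostnames from the testdata used above:

	out/minikube-linux-amd64 -p addons-417207 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
	nslookup hello-john.test $(out/minikube-linux-amd64 -p addons-417207 ip)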

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (10.86s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-hcgcg" [2445e2e5-fb04-4597-a7de-1bd4929a375d] Running
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.017666101s
addons_test.go:817: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-417207
addons_test.go:817: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-417207: (5.841908719s)
--- PASS: TestAddons/parallel/InspektorGadget (10.86s)

TestAddons/parallel/MetricsServer (6.2s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: metrics-server stabilized in 19.838604ms
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-zznz9" [cde15256-8d91-4f98-ac41-eb1bb1b8097c] Running
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.018750872s
addons_test.go:391: (dbg) Run:  kubectl --context addons-417207 top pods -n kube-system
addons_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p addons-417207 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p addons-417207 addons disable metrics-server --alsologtostderr -v=1: (1.079393296s)
--- PASS: TestAddons/parallel/MetricsServer (6.20s)

TestAddons/parallel/HelmTiller (14.55s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:432: tiller-deploy stabilized in 20.212556ms
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-fr9h6" [c4b0a3e7-c211-4e44-a274-dee9f2c7e05e] Running
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.031646548s
addons_test.go:449: (dbg) Run:  kubectl --context addons-417207 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:449: (dbg) Done: kubectl --context addons-417207 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (8.928424429s)
addons_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p addons-417207 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (14.55s)

TestAddons/parallel/CSI (54.98s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:537: csi-hostpath-driver pods stabilized in 10.476549ms
addons_test.go:540: (dbg) Run:  kubectl --context addons-417207 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417207 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417207 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417207 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417207 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417207 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417207 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417207 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417207 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417207 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417207 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417207 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417207 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417207 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417207 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417207 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417207 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417207 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417207 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:550: (dbg) Run:  kubectl --context addons-417207 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [3495aef4-54a1-4fe9-ae43-bb651f32d725] Pending
helpers_test.go:344: "task-pv-pod" [3495aef4-54a1-4fe9-ae43-bb651f32d725] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [3495aef4-54a1-4fe9-ae43-bb651f32d725] Running
addons_test.go:555: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 16.020374847s
addons_test.go:560: (dbg) Run:  kubectl --context addons-417207 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-417207 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-417207 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-417207 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:570: (dbg) Run:  kubectl --context addons-417207 delete pod task-pv-pod
addons_test.go:570: (dbg) Done: kubectl --context addons-417207 delete pod task-pv-pod: (1.110404991s)
addons_test.go:576: (dbg) Run:  kubectl --context addons-417207 delete pvc hpvc
addons_test.go:582: (dbg) Run:  kubectl --context addons-417207 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417207 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417207 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-417207 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-417207 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:597: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [e43e9a45-fcde-414b-a73d-d78320934925] Pending
helpers_test.go:344: "task-pv-pod-restore" [e43e9a45-fcde-414b-a73d-d78320934925] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [e43e9a45-fcde-414b-a73d-d78320934925] Running
addons_test.go:597: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.019257243s
addons_test.go:602: (dbg) Run:  kubectl --context addons-417207 delete pod task-pv-pod-restore
addons_test.go:606: (dbg) Run:  kubectl --context addons-417207 delete pvc hpvc-restore
addons_test.go:610: (dbg) Run:  kubectl --context addons-417207 delete volumesnapshot new-snapshot-demo
addons_test.go:614: (dbg) Run:  out/minikube-linux-amd64 -p addons-417207 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:614: (dbg) Done: out/minikube-linux-amd64 -p addons-417207 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.681440698s)
addons_test.go:618: (dbg) Run:  out/minikube-linux-amd64 -p addons-417207 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (54.98s)

TestAddons/parallel/Headlamp (15.34s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-417207 --alsologtostderr -v=1
addons_test.go:800: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-417207 --alsologtostderr -v=1: (1.299472323s)
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-699c48fb74-9n9rd" [24676042-936e-415e-afe8-c0ccc1dd96e2] Pending
helpers_test.go:344: "headlamp-699c48fb74-9n9rd" [24676042-936e-415e-afe8-c0ccc1dd96e2] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-699c48fb74-9n9rd" [24676042-936e-415e-afe8-c0ccc1dd96e2] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.035129549s
--- PASS: TestAddons/parallel/Headlamp (15.34s)

TestAddons/parallel/CloudSpanner (5.69s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-7d49f968d9-zjshp" [8e5347a3-d404-453c-8309-e18edd0e9c44] Running
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.012914219s
addons_test.go:836: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-417207
--- PASS: TestAddons/parallel/CloudSpanner (5.69s)

TestAddons/serial/GCPAuth/Namespaces (0.13s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-417207 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-417207 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

TestAddons/StoppedEnableDisable (13.35s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-417207
addons_test.go:148: (dbg) Done: out/minikube-linux-amd64 stop -p addons-417207: (13.110581885s)
addons_test.go:152: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-417207
addons_test.go:156: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-417207
addons_test.go:161: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-417207
--- PASS: TestAddons/StoppedEnableDisable (13.35s)

TestCertOptions (53.93s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-811622 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 
E0914 19:23:29.910234   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/skaffold-872503/client.crt: no such file or directory
E0914 19:23:29.915540   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/skaffold-872503/client.crt: no such file or directory
E0914 19:23:29.925862   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/skaffold-872503/client.crt: no such file or directory
E0914 19:23:29.946232   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/skaffold-872503/client.crt: no such file or directory
E0914 19:23:29.986535   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/skaffold-872503/client.crt: no such file or directory
E0914 19:23:30.067558   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/skaffold-872503/client.crt: no such file or directory
E0914 19:23:30.227831   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/skaffold-872503/client.crt: no such file or directory
E0914 19:23:30.548484   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/skaffold-872503/client.crt: no such file or directory
E0914 19:23:31.189573   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/skaffold-872503/client.crt: no such file or directory
E0914 19:23:32.470683   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/skaffold-872503/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-811622 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 : (52.233129283s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-811622 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-811622 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-811622 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-811622" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-811622
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-811622: (1.202802957s)
--- PASS: TestCertOptions (53.93s)

TestCertExpiration (321.74s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-395586 --memory=2048 --cert-expiration=3m --driver=kvm2 
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-395586 --memory=2048 --cert-expiration=3m --driver=kvm2 : (1m45.565422805s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-395586 --memory=2048 --cert-expiration=8760h --driver=kvm2 
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-395586 --memory=2048 --cert-expiration=8760h --driver=kvm2 : (35.16905771s)
helpers_test.go:175: Cleaning up "cert-expiration-395586" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-395586
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-395586: (1.00894383s)
--- PASS: TestCertExpiration (321.74s)

TestDockerFlags (58.54s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-283678 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-283678 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 : (57.023893515s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-283678 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-283678 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-283678" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-283678
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-283678: (1.01972005s)
--- PASS: TestDockerFlags (58.54s)

TestForceSystemdFlag (56.88s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-612174 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-612174 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 : (55.466674699s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-612174 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-612174" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-612174
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-612174: (1.139043935s)
--- PASS: TestForceSystemdFlag (56.88s)

TestForceSystemdEnv (88.86s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-500940 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-500940 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 : (1m27.469495905s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-500940 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-500940" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-500940
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-500940: (1.069692087s)
--- PASS: TestForceSystemdEnv (88.86s)

TestKVMDriverInstallOrUpdate (3.61s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
E0914 19:23:50.392614   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/skaffold-872503/client.crt: no such file or directory
--- PASS: TestKVMDriverInstallOrUpdate (3.61s)

TestErrorSpam/setup (51.57s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-742018 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-742018 --driver=kvm2 
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-742018 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-742018 --driver=kvm2 : (51.567493311s)
--- PASS: TestErrorSpam/setup (51.57s)

TestErrorSpam/start (0.33s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-742018 --log_dir /tmp/nospam-742018 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-742018 --log_dir /tmp/nospam-742018 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-742018 --log_dir /tmp/nospam-742018 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

TestErrorSpam/status (0.75s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-742018 --log_dir /tmp/nospam-742018 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-742018 --log_dir /tmp/nospam-742018 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-742018 --log_dir /tmp/nospam-742018 status
--- PASS: TestErrorSpam/status (0.75s)

TestErrorSpam/pause (1.16s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-742018 --log_dir /tmp/nospam-742018 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-742018 --log_dir /tmp/nospam-742018 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-742018 --log_dir /tmp/nospam-742018 pause
--- PASS: TestErrorSpam/pause (1.16s)

TestErrorSpam/unpause (1.32s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-742018 --log_dir /tmp/nospam-742018 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-742018 --log_dir /tmp/nospam-742018 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-742018 --log_dir /tmp/nospam-742018 unpause
--- PASS: TestErrorSpam/unpause (1.32s)

TestErrorSpam/stop (4.2s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-742018 --log_dir /tmp/nospam-742018 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-742018 --log_dir /tmp/nospam-742018 stop: (4.077501288s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-742018 --log_dir /tmp/nospam-742018 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-742018 --log_dir /tmp/nospam-742018 stop
--- PASS: TestErrorSpam/stop (4.20s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17217-7285/.minikube/files/etc/test/nested/copy/14506/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (63.23s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-281336 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-281336 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 : (1m3.229010026s)
--- PASS: TestFunctional/serial/StartWithProxy (63.23s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (38.53s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-281336 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-281336 --alsologtostderr -v=8: (38.532734376s)
functional_test.go:659: soft start took 38.533504532s for "functional-281336" cluster.
--- PASS: TestFunctional/serial/SoftStart (38.53s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-281336 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.35s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.35s)

TestFunctional/serial/CacheCmd/cache/add_local (1.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-281336 /tmp/TestFunctionalserialCacheCmdcacheadd_local2801703419/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 cache add minikube-local-cache-test:functional-281336
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-281336 cache add minikube-local-cache-test:functional-281336: (1.00386718s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 cache delete minikube-local-cache-test:functional-281336
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-281336
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.31s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.14s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-281336 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (206.51522ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.14s)

TestFunctional/serial/CacheCmd/cache/delete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 kubectl -- --context functional-281336 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-281336 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

TestFunctional/serial/ExtraConfig (41.7s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-281336 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0914 18:51:02.658525   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/addons-417207/client.crt: no such file or directory
E0914 18:51:02.664201   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/addons-417207/client.crt: no such file or directory
E0914 18:51:02.674439   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/addons-417207/client.crt: no such file or directory
E0914 18:51:02.694708   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/addons-417207/client.crt: no such file or directory
E0914 18:51:02.735046   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/addons-417207/client.crt: no such file or directory
E0914 18:51:02.815261   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/addons-417207/client.crt: no such file or directory
E0914 18:51:02.975711   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/addons-417207/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-281336 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.702941228s)
functional_test.go:757: restart took 41.703062391s for "functional-281336" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (41.70s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-281336 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.07s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 logs
E0914 18:51:03.296557   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/addons-417207/client.crt: no such file or directory
E0914 18:51:03.937237   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/addons-417207/client.crt: no such file or directory
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-281336 logs: (1.073076801s)
--- PASS: TestFunctional/serial/LogsCmd (1.07s)

TestFunctional/serial/LogsFileCmd (1.08s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 logs --file /tmp/TestFunctionalserialLogsFileCmd524450549/001/logs.txt
E0914 18:51:05.217595   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/addons-417207/client.crt: no such file or directory
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-281336 logs --file /tmp/TestFunctionalserialLogsFileCmd524450549/001/logs.txt: (1.081350511s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.08s)

TestFunctional/serial/InvalidService (5.19s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-281336 apply -f testdata/invalidsvc.yaml
E0914 18:51:07.778325   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/addons-417207/client.crt: no such file or directory
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-281336
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-281336: exit status 115 (286.899356ms)

-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.50.139:30788 |
	|-----------|-------------|-------------|-----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-281336 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-281336 delete -f testdata/invalidsvc.yaml: (1.583896255s)
--- PASS: TestFunctional/serial/InvalidService (5.19s)

TestFunctional/parallel/ConfigCmd (0.3s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-281336 config get cpus: exit status 14 (53.281103ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-281336 config get cpus: exit status 14 (44.868352ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.30s)

TestFunctional/parallel/DashboardCmd (19.96s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-281336 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-281336 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 19757: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (19.96s)

TestFunctional/parallel/DryRun (0.27s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-281336 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-281336 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (142.582978ms)

-- stdout --
	* [functional-281336] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17217
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17217-7285/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17217-7285/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0914 18:51:12.954459   19627 out.go:296] Setting OutFile to fd 1 ...
	I0914 18:51:12.954605   19627 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 18:51:12.954640   19627 out.go:309] Setting ErrFile to fd 2...
	I0914 18:51:12.954657   19627 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 18:51:12.954964   19627 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17217-7285/.minikube/bin
	I0914 18:51:12.955695   19627 out.go:303] Setting JSON to false
	I0914 18:51:12.957015   19627 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":2022,"bootTime":1694715451,"procs":229,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 18:51:12.957127   19627 start.go:138] virtualization: kvm guest
	I0914 18:51:12.959427   19627 out.go:177] * [functional-281336] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0914 18:51:12.960918   19627 out.go:177]   - MINIKUBE_LOCATION=17217
	I0914 18:51:12.960959   19627 notify.go:220] Checking for updates...
	I0914 18:51:12.962303   19627 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 18:51:12.963729   19627 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17217-7285/kubeconfig
	I0914 18:51:12.965065   19627 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17217-7285/.minikube
	I0914 18:51:12.966362   19627 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 18:51:12.967666   19627 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 18:51:12.969394   19627 config.go:182] Loaded profile config "functional-281336": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 18:51:12.969793   19627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0914 18:51:12.969842   19627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:51:12.988363   19627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39393
	I0914 18:51:12.988935   19627 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:51:12.989622   19627 main.go:141] libmachine: Using API Version  1
	I0914 18:51:12.989660   19627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:51:12.990048   19627 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:51:12.990238   19627 main.go:141] libmachine: (functional-281336) Calling .DriverName
	I0914 18:51:12.990542   19627 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 18:51:12.990946   19627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0914 18:51:12.990988   19627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:51:13.006520   19627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34865
	I0914 18:51:13.006978   19627 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:51:13.007459   19627 main.go:141] libmachine: Using API Version  1
	I0914 18:51:13.007480   19627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:51:13.008000   19627 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:51:13.008209   19627 main.go:141] libmachine: (functional-281336) Calling .DriverName
	I0914 18:51:13.041375   19627 out.go:177] * Using the kvm2 driver based on existing profile
	I0914 18:51:13.042708   19627 start.go:298] selected driver: kvm2
	I0914 18:51:13.042723   19627 start.go:902] validating driver "kvm2" against &{Name:functional-281336 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:functional-281336 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.139 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 18:51:13.042808   19627 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 18:51:13.044944   19627 out.go:177] 
	W0914 18:51:13.046345   19627 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0914 18:51:13.047811   19627 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-281336 --dry-run --alsologtostderr -v=1 --driver=kvm2 
--- PASS: TestFunctional/parallel/DryRun (0.27s)

TestFunctional/parallel/InternationalLanguage (0.15s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-281336 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
E0914 18:51:12.899319   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/addons-417207/client.crt: no such file or directory
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-281336 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (145.492731ms)

                                                
                                                
-- stdout --
	* [functional-281336] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17217
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17217-7285/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17217-7285/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 18:51:12.813247   19584 out.go:296] Setting OutFile to fd 1 ...
	I0914 18:51:12.813531   19584 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 18:51:12.813540   19584 out.go:309] Setting ErrFile to fd 2...
	I0914 18:51:12.813545   19584 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 18:51:12.813773   19584 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17217-7285/.minikube/bin
	I0914 18:51:12.814310   19584 out.go:303] Setting JSON to false
	I0914 18:51:12.815275   19584 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":2022,"bootTime":1694715451,"procs":227,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0914 18:51:12.815375   19584 start.go:138] virtualization: kvm guest
	I0914 18:51:12.817891   19584 out.go:177] * [functional-281336] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	I0914 18:51:12.819869   19584 out.go:177]   - MINIKUBE_LOCATION=17217
	I0914 18:51:12.819870   19584 notify.go:220] Checking for updates...
	I0914 18:51:12.821487   19584 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 18:51:12.822833   19584 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17217-7285/kubeconfig
	I0914 18:51:12.824123   19584 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17217-7285/.minikube
	I0914 18:51:12.825527   19584 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0914 18:51:12.826873   19584 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 18:51:12.828804   19584 config.go:182] Loaded profile config "functional-281336": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 18:51:12.829410   19584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0914 18:51:12.829505   19584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:51:12.845095   19584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40349
	I0914 18:51:12.845519   19584 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:51:12.846066   19584 main.go:141] libmachine: Using API Version  1
	I0914 18:51:12.846099   19584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:51:12.846497   19584 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:51:12.846714   19584 main.go:141] libmachine: (functional-281336) Calling .DriverName
	I0914 18:51:12.847027   19584 driver.go:373] Setting default libvirt URI to qemu:///system
	I0914 18:51:12.847450   19584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0914 18:51:12.847504   19584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 18:51:12.861724   19584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39299
	I0914 18:51:12.862171   19584 main.go:141] libmachine: () Calling .GetVersion
	I0914 18:51:12.862691   19584 main.go:141] libmachine: Using API Version  1
	I0914 18:51:12.862714   19584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 18:51:12.863094   19584 main.go:141] libmachine: () Calling .GetMachineName
	I0914 18:51:12.863295   19584 main.go:141] libmachine: (functional-281336) Calling .DriverName
	I0914 18:51:12.898092   19584 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0914 18:51:12.899733   19584 start.go:298] selected driver: kvm2
	I0914 18:51:12.899750   19584 start.go:902] validating driver "kvm2" against &{Name:functional-281336 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17194/minikube-v1.31.0-1694468241-17194-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.1 ClusterName:functional-281336 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.139 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 C
ertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0914 18:51:12.899889   19584 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 18:51:12.902397   19584 out.go:177] 
	W0914 18:51:12.903855   19584 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0914 18:51:12.905318   19584 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)
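The DryRun and InternationalLanguage runs above both exercise the same rejection path: a start with --memory 250MB is refused before any VM work happens. The sketch below is a hypothetical standalone reproduction, not functional_test.go; the binary path, profile name, and the expectation that the rejection surfaces as a non-zero exit (status 23, RSRC_INSUFFICIENT_REQ_MEMORY in this report) are taken from the log output above.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	// Binary path and profile name come from the run recorded above.
    	cmd := exec.Command("out/minikube-linux-amd64",
    		"start", "-p", "functional-281336",
    		"--dry-run", "--memory", "250MB",
    		"--alsologtostderr", "--driver=kvm2")
    	out, err := cmd.CombinedOutput()
    	if err == nil {
    		fmt.Println("expected the 250MB dry run to be rejected, but it succeeded")
    		os.Exit(1)
    	}
    	// In this report the rejection shows up as exit status 23.
    	if ee, ok := err.(*exec.ExitError); ok {
    		fmt.Printf("dry run exited with code %d\n%s\n", ee.ExitCode(), out)
    	}
    }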

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.88s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (10.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-281336 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-281336 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-g54p2" [2fa833eb-90af-4e0c-ac30-dbf774e4d4bb] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
2023/09/14 18:51:32 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:344: "hello-node-connect-55497b8b78-g54p2" [2fa833eb-90af-4e0c-ac30-dbf774e4d4bb] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.027617651s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.50.139:30157
functional_test.go:1674: http://192.168.50.139:30157: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-55497b8b78-g54p2

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.50.139:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.50.139:30157
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.59s)
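The ServiceCmdConnect steps above reduce to three commands (create deployment, expose as NodePort, ask minikube for the URL) followed by polling the endpoint. The following is a minimal sketch of that flow, assuming the same profile name and image as this run; it is not the suite's implementation and ignores errors it would otherwise report.

    package main

    import (
    	"fmt"
    	"net/http"
    	"os/exec"
    	"strings"
    	"time"
    )

    // run executes a command and returns its trimmed combined output.
    func run(name string, args ...string) (string, error) {
    	out, err := exec.Command(name, args...).CombinedOutput()
    	return strings.TrimSpace(string(out)), err
    }

    func main() {
    	profile := "functional-281336" // profile/context name used throughout this run

    	// Same sequence the test drives: deploy, expose as NodePort, ask minikube for the URL.
    	run("kubectl", "--context", profile, "create", "deployment", "hello-node-connect",
    		"--image=registry.k8s.io/echoserver:1.8")
    	run("kubectl", "--context", profile, "expose", "deployment", "hello-node-connect",
    		"--type=NodePort", "--port=8080")
    	url, _ := run("out/minikube-linux-amd64", "-p", profile, "service", "hello-node-connect", "--url")

    	// Poll until the echoserver responds; the test above allows up to 10 minutes for the pod.
    	for deadline := time.Now().Add(10 * time.Minute); time.Now().Before(deadline); time.Sleep(5 * time.Second) {
    		resp, err := http.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("endpoint is serving:", url)
    				return
    			}
    		}
    	}
    	fmt.Println("endpoint never became ready:", url)
    }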

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (56.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [ae742e05-23ca-4173-a9d1-538d7b4ae83f] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.026204432s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-281336 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-281336 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-281336 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-281336 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [b1ecca8d-6aeb-4e69-8bde-9a2168540a24] Pending
helpers_test.go:344: "sp-pod" [b1ecca8d-6aeb-4e69-8bde-9a2168540a24] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [b1ecca8d-6aeb-4e69-8bde-9a2168540a24] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 30.026242101s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-281336 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-281336 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-281336 delete -f testdata/storage-provisioner/pod.yaml: (1.647986241s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-281336 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c243731a-b800-4882-a447-3bd1ddb15f6d] Pending
helpers_test.go:344: "sp-pod" [c243731a-b800-4882-a447-3bd1ddb15f6d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [c243731a-b800-4882-a447-3bd1ddb15f6d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 19.016777368s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-281336 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (56.94s)
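The persistence check above amounts to: claim storage, write a marker file from a pod, recreate the pod, and confirm the file is still on the volume. The sketch below replays the same kubectl calls against this profile; it is not functional_test_pvc_test.go, and the pod-readiness waits the test performs between steps are omitted.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // kubectl runs a kubectl command against this run's context and echoes its output.
    func kubectl(args ...string) error {
    	full := append([]string{"--context", "functional-281336"}, args...)
    	out, err := exec.Command("kubectl", full...).CombinedOutput()
    	fmt.Printf("kubectl %v\n%s", args, out)
    	return err
    }

    func main() {
    	// Claim storage, start a pod that mounts it, and write a marker file.
    	kubectl("apply", "-f", "testdata/storage-provisioner/pvc.yaml")
    	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
    	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")

    	// Recreate the pod and confirm the file survived on the provisioned volume.
    	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
    	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
    	if err := kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount"); err != nil {
    		fmt.Println("persistence check failed:", err)
    	}
    }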

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.47s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (0.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 ssh -n functional-281336 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 cp functional-281336:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1398921689/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 ssh -n functional-281336 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.88s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (40.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-281336 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-45thw" [96ed57ce-9d7f-49b0-aa52-d3f1cdde57e3] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-45thw" [96ed57ce-9d7f-49b0-aa52-d3f1cdde57e3] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 35.020413768s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-281336 exec mysql-859648c796-45thw -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-281336 exec mysql-859648c796-45thw -- mysql -ppassword -e "show databases;": exit status 1 (191.193773ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-281336 exec mysql-859648c796-45thw -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-281336 exec mysql-859648c796-45thw -- mysql -ppassword -e "show databases;": exit status 1 (177.274417ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-281336 exec mysql-859648c796-45thw -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-281336 exec mysql-859648c796-45thw -- mysql -ppassword -e "show databases;": exit status 1 (173.871504ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-281336 exec mysql-859648c796-45thw -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (40.57s)
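The MySQL output above shows the probe query failing a few times (root access briefly denied, then the socket not yet available) while the server initialises, before finally succeeding. A minimal retry loop that mirrors this behaviour is sketched below; the pod name is the one from this run and would normally be discovered via a label selector, and this is not the suite's code.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	pod := "mysql-859648c796-45thw" // pod name from this run

    	// Retry the probe query until the server accepts it, as the test does above.
    	for i := 0; i < 30; i++ {
    		out, err := exec.Command("kubectl", "--context", "functional-281336",
    			"exec", pod, "--", "mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
    		if err == nil {
    			fmt.Printf("mysql is ready:\n%s", out)
    			return
    		}
    		fmt.Printf("attempt %d failed: %v\n", i+1, err)
    		time.Sleep(10 * time.Second)
    	}
    	fmt.Println("mysql never became ready")
    }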

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/14506/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 ssh "sudo cat /etc/test/nested/copy/14506/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/14506.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 ssh "sudo cat /etc/ssl/certs/14506.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/14506.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 ssh "sudo cat /usr/share/ca-certificates/14506.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/145062.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 ssh "sudo cat /etc/ssl/certs/145062.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/145062.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 ssh "sudo cat /usr/share/ca-certificates/145062.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.52s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-281336 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-281336 ssh "sudo systemctl is-active crio": exit status 1 (223.928332ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/License (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.20s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (15.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-281336 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-281336 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-785nd" [f9b4f796-4027-425a-a441-b1bd4faed82c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-785nd" [f9b4f796-4027-425a-a441-b1bd4faed82c] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 15.027703536s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (15.22s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "237.782299ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "55.944611ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (11.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-281336 /tmp/TestFunctionalparallelMountCmdany-port2582472605/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1694717471510117864" to /tmp/TestFunctionalparallelMountCmdany-port2582472605/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1694717471510117864" to /tmp/TestFunctionalparallelMountCmdany-port2582472605/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1694717471510117864" to /tmp/TestFunctionalparallelMountCmdany-port2582472605/001/test-1694717471510117864
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-281336 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (274.376398ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 14 18:51 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 14 18:51 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 14 18:51 test-1694717471510117864
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 ssh cat /mount-9p/test-1694717471510117864
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-281336 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [d96ec698-afde-40df-9197-7d30482001a1] Pending
helpers_test.go:344: "busybox-mount" [d96ec698-afde-40df-9197-7d30482001a1] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [d96ec698-afde-40df-9197-7d30482001a1] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [d96ec698-afde-40df-9197-7d30482001a1] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 9.020748464s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-281336 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-281336 /tmp/TestFunctionalparallelMountCmdany-port2582472605/001:/mount-9p --alsologtostderr -v=1] ...
E0914 18:51:23.140349   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/addons-417207/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/MountCmd/any-port (11.89s)
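The any-port mount flow above starts `minikube mount` as a background daemon and then polls `findmnt` inside the guest until the 9p mount appears (the first findmnt attempt fails because the mount is not instantaneous). The sketch below follows that shape; the host directory is an arbitrary assumption, and the busybox-mount pod steps from the test are left out.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	profile := "functional-281336"
    	hostDir := "/tmp/mount-demo" // assumption: any host directory to export

    	// Start the 9p mount in the background, as the test's "daemon:" step does.
    	mount := exec.Command("out/minikube-linux-amd64", "mount", "-p", profile,
    		hostDir+":/mount-9p", "--alsologtostderr", "-v=1")
    	if err := mount.Start(); err != nil {
    		fmt.Println("could not start mount:", err)
    		return
    	}
    	defer mount.Process.Kill() // mirrors the test's cleanup step

    	// Poll findmnt inside the VM until the mount shows up.
    	for i := 0; i < 20; i++ {
    		out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
    			"ssh", "findmnt -T /mount-9p | grep 9p").CombinedOutput()
    		if err == nil {
    			fmt.Printf("mount is visible in the guest:\n%s", out)
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("mount never appeared in the guest")
    }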

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "315.128487ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "41.00971ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-281336 /tmp/TestFunctionalparallelMountCmdspecific-port4052536333/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-281336 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (222.340172ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-281336 /tmp/TestFunctionalparallelMountCmdspecific-port4052536333/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-281336 ssh "sudo umount -f /mount-9p": exit status 1 (188.99701ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-281336 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-281336 /tmp/TestFunctionalparallelMountCmdspecific-port4052536333/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.71s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-281336 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1268023553/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-281336 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1268023553/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-281336 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1268023553/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-281336 ssh "findmnt -T" /mount1: exit status 1 (255.766201ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-281336 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-281336 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1268023553/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-281336 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1268023553/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-281336 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1268023553/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.45s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 service list -o json
functional_test.go:1493: Took "467.592548ms" to run "out/minikube-linux-amd64 -p functional-281336 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.47s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.50.139:30980
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.68s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.50.139:30980
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-281336 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.1
registry.k8s.io/kube-proxy:v1.28.1
registry.k8s.io/kube-controller-manager:v1.28.1
registry.k8s.io/kube-apiserver:v1.28.1
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-281336
docker.io/library/nginx:latest
docker.io/library/minikube-local-cache-test:functional-281336
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-281336 image ls --format short --alsologtostderr:
I0914 18:51:49.785826   21530 out.go:296] Setting OutFile to fd 1 ...
I0914 18:51:49.785967   21530 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0914 18:51:49.785979   21530 out.go:309] Setting ErrFile to fd 2...
I0914 18:51:49.785987   21530 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0914 18:51:49.786259   21530 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17217-7285/.minikube/bin
I0914 18:51:49.787027   21530 config.go:182] Loaded profile config "functional-281336": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0914 18:51:49.787179   21530 config.go:182] Loaded profile config "functional-281336": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0914 18:51:49.787722   21530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0914 18:51:49.787789   21530 main.go:141] libmachine: Launching plugin server for driver kvm2
I0914 18:51:49.801801   21530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46503
I0914 18:51:49.802224   21530 main.go:141] libmachine: () Calling .GetVersion
I0914 18:51:49.802809   21530 main.go:141] libmachine: Using API Version  1
I0914 18:51:49.802853   21530 main.go:141] libmachine: () Calling .SetConfigRaw
I0914 18:51:49.803237   21530 main.go:141] libmachine: () Calling .GetMachineName
I0914 18:51:49.803417   21530 main.go:141] libmachine: (functional-281336) Calling .GetState
I0914 18:51:49.805584   21530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0914 18:51:49.805634   21530 main.go:141] libmachine: Launching plugin server for driver kvm2
I0914 18:51:49.819767   21530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36257
I0914 18:51:49.820221   21530 main.go:141] libmachine: () Calling .GetVersion
I0914 18:51:49.820747   21530 main.go:141] libmachine: Using API Version  1
I0914 18:51:49.820770   21530 main.go:141] libmachine: () Calling .SetConfigRaw
I0914 18:51:49.821119   21530 main.go:141] libmachine: () Calling .GetMachineName
I0914 18:51:49.821320   21530 main.go:141] libmachine: (functional-281336) Calling .DriverName
I0914 18:51:49.821527   21530 ssh_runner.go:195] Run: systemctl --version
I0914 18:51:49.821552   21530 main.go:141] libmachine: (functional-281336) Calling .GetSSHHostname
I0914 18:51:49.824632   21530 main.go:141] libmachine: (functional-281336) DBG | domain functional-281336 has defined MAC address 52:54:00:d8:b6:ee in network mk-functional-281336
I0914 18:51:49.825064   21530 main.go:141] libmachine: (functional-281336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:b6:ee", ip: ""} in network mk-functional-281336: {Iface:virbr1 ExpiryTime:2023-09-14 19:48:50 +0000 UTC Type:0 Mac:52:54:00:d8:b6:ee Iaid: IPaddr:192.168.50.139 Prefix:24 Hostname:functional-281336 Clientid:01:52:54:00:d8:b6:ee}
I0914 18:51:49.825092   21530 main.go:141] libmachine: (functional-281336) DBG | domain functional-281336 has defined IP address 192.168.50.139 and MAC address 52:54:00:d8:b6:ee in network mk-functional-281336
I0914 18:51:49.825231   21530 main.go:141] libmachine: (functional-281336) Calling .GetSSHPort
I0914 18:51:49.825397   21530 main.go:141] libmachine: (functional-281336) Calling .GetSSHKeyPath
I0914 18:51:49.825562   21530 main.go:141] libmachine: (functional-281336) Calling .GetSSHUsername
I0914 18:51:49.825690   21530 sshutil.go:53] new ssh client: &{IP:192.168.50.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/functional-281336/id_rsa Username:docker}
I0914 18:51:49.931975   21530 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0914 18:51:49.960807   21530 main.go:141] libmachine: Making call to close driver server
I0914 18:51:49.960823   21530 main.go:141] libmachine: (functional-281336) Calling .Close
I0914 18:51:49.961072   21530 main.go:141] libmachine: Successfully made call to close driver server
I0914 18:51:49.961090   21530 main.go:141] libmachine: (functional-281336) DBG | Closing plugin on server side
I0914 18:51:49.961096   21530 main.go:141] libmachine: Making call to close connection to plugin binary
I0914 18:51:49.961105   21530 main.go:141] libmachine: Making call to close driver server
I0914 18:51:49.961113   21530 main.go:141] libmachine: (functional-281336) Calling .Close
I0914 18:51:49.961424   21530 main.go:141] libmachine: Successfully made call to close driver server
I0914 18:51:49.961439   21530 main.go:141] libmachine: Making call to close connection to plugin binary
I0914 18:51:49.961475   21530 main.go:141] libmachine: (functional-281336) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-281336 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/kube-scheduler              | v1.28.1           | b462ce0c8b1ff | 60.1MB |
| registry.k8s.io/kube-proxy                  | v1.28.1           | 6cdbabde3874e | 73.1MB |
| docker.io/library/nginx                     | latest            | f5a6b296b8a29 | 187MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| registry.k8s.io/kube-controller-manager     | v1.28.1           | 821b3dfea27be | 122MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| gcr.io/google-containers/addon-resizer      | functional-281336 | ffd4cfbbe753e | 32.9MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| docker.io/library/minikube-local-cache-test | functional-281336 | 132009f48cc8e | 30B    |
| registry.k8s.io/kube-apiserver              | v1.28.1           | 5c801295c21d0 | 126MB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-281336 image ls --format table --alsologtostderr:
I0914 18:51:50.214306   21627 out.go:296] Setting OutFile to fd 1 ...
I0914 18:51:50.214411   21627 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0914 18:51:50.214420   21627 out.go:309] Setting ErrFile to fd 2...
I0914 18:51:50.214427   21627 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0914 18:51:50.214616   21627 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17217-7285/.minikube/bin
I0914 18:51:50.215152   21627 config.go:182] Loaded profile config "functional-281336": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0914 18:51:50.215268   21627 config.go:182] Loaded profile config "functional-281336": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0914 18:51:50.215628   21627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0914 18:51:50.215687   21627 main.go:141] libmachine: Launching plugin server for driver kvm2
I0914 18:51:50.230928   21627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46695
I0914 18:51:50.231962   21627 main.go:141] libmachine: () Calling .GetVersion
I0914 18:51:50.232573   21627 main.go:141] libmachine: Using API Version  1
I0914 18:51:50.232604   21627 main.go:141] libmachine: () Calling .SetConfigRaw
I0914 18:51:50.232953   21627 main.go:141] libmachine: () Calling .GetMachineName
I0914 18:51:50.233136   21627 main.go:141] libmachine: (functional-281336) Calling .GetState
I0914 18:51:50.235085   21627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0914 18:51:50.235133   21627 main.go:141] libmachine: Launching plugin server for driver kvm2
I0914 18:51:50.254970   21627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46177
I0914 18:51:50.255497   21627 main.go:141] libmachine: () Calling .GetVersion
I0914 18:51:50.255973   21627 main.go:141] libmachine: Using API Version  1
I0914 18:51:50.255995   21627 main.go:141] libmachine: () Calling .SetConfigRaw
I0914 18:51:50.256349   21627 main.go:141] libmachine: () Calling .GetMachineName
I0914 18:51:50.256557   21627 main.go:141] libmachine: (functional-281336) Calling .DriverName
I0914 18:51:50.256760   21627 ssh_runner.go:195] Run: systemctl --version
I0914 18:51:50.256786   21627 main.go:141] libmachine: (functional-281336) Calling .GetSSHHostname
I0914 18:51:50.259864   21627 main.go:141] libmachine: (functional-281336) DBG | domain functional-281336 has defined MAC address 52:54:00:d8:b6:ee in network mk-functional-281336
I0914 18:51:50.260233   21627 main.go:141] libmachine: (functional-281336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:b6:ee", ip: ""} in network mk-functional-281336: {Iface:virbr1 ExpiryTime:2023-09-14 19:48:50 +0000 UTC Type:0 Mac:52:54:00:d8:b6:ee Iaid: IPaddr:192.168.50.139 Prefix:24 Hostname:functional-281336 Clientid:01:52:54:00:d8:b6:ee}
I0914 18:51:50.260292   21627 main.go:141] libmachine: (functional-281336) DBG | domain functional-281336 has defined IP address 192.168.50.139 and MAC address 52:54:00:d8:b6:ee in network mk-functional-281336
I0914 18:51:50.260402   21627 main.go:141] libmachine: (functional-281336) Calling .GetSSHPort
I0914 18:51:50.260586   21627 main.go:141] libmachine: (functional-281336) Calling .GetSSHKeyPath
I0914 18:51:50.260731   21627 main.go:141] libmachine: (functional-281336) Calling .GetSSHUsername
I0914 18:51:50.260837   21627 sshutil.go:53] new ssh client: &{IP:192.168.50.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/functional-281336/id_rsa Username:docker}
I0914 18:51:50.350680   21627 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0914 18:51:50.382378   21627 main.go:141] libmachine: Making call to close driver server
I0914 18:51:50.382394   21627 main.go:141] libmachine: (functional-281336) Calling .Close
I0914 18:51:50.382658   21627 main.go:141] libmachine: Successfully made call to close driver server
I0914 18:51:50.382685   21627 main.go:141] libmachine: Making call to close connection to plugin binary
I0914 18:51:50.382696   21627 main.go:141] libmachine: Making call to close driver server
I0914 18:51:50.382706   21627 main.go:141] libmachine: (functional-281336) Calling .Close
I0914 18:51:50.382945   21627 main.go:141] libmachine: (functional-281336) DBG | Closing plugin on server side
I0914 18:51:50.382959   21627 main.go:141] libmachine: Successfully made call to close driver server
I0914 18:51:50.382972   21627 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-281336 image ls --format json --alsologtostderr:
[{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-281336"],"size":"32900000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.1"],"size":"122000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"
744000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.1"],"size":"73100000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],
"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"f5a6b296b8a29b4e3d89ffa99e4a86309874ae400e82b3d3993f84e1e3bb0eb9","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.1"],"size":"126000000"},{"id":"b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.1"],"size":"60100000"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"},{"id":"132009f48cc8e1ced94e73d67d6bae47f36a1c20cfc7bf956a3636b442b241e6","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-281336"],"size":"30"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-281336 image ls --format json --alsologtostderr:
I0914 18:51:50.011412   21574 out.go:296] Setting OutFile to fd 1 ...
I0914 18:51:50.011508   21574 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0914 18:51:50.011517   21574 out.go:309] Setting ErrFile to fd 2...
I0914 18:51:50.011521   21574 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0914 18:51:50.011722   21574 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17217-7285/.minikube/bin
I0914 18:51:50.012281   21574 config.go:182] Loaded profile config "functional-281336": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0914 18:51:50.012376   21574 config.go:182] Loaded profile config "functional-281336": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0914 18:51:50.012734   21574 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0914 18:51:50.012778   21574 main.go:141] libmachine: Launching plugin server for driver kvm2
I0914 18:51:50.028320   21574 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40413
I0914 18:51:50.028769   21574 main.go:141] libmachine: () Calling .GetVersion
I0914 18:51:50.029353   21574 main.go:141] libmachine: Using API Version  1
I0914 18:51:50.029375   21574 main.go:141] libmachine: () Calling .SetConfigRaw
I0914 18:51:50.029743   21574 main.go:141] libmachine: () Calling .GetMachineName
I0914 18:51:50.029952   21574 main.go:141] libmachine: (functional-281336) Calling .GetState
I0914 18:51:50.032716   21574 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0914 18:51:50.032782   21574 main.go:141] libmachine: Launching plugin server for driver kvm2
I0914 18:51:50.048187   21574 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38671
I0914 18:51:50.048734   21574 main.go:141] libmachine: () Calling .GetVersion
I0914 18:51:50.049277   21574 main.go:141] libmachine: Using API Version  1
I0914 18:51:50.049293   21574 main.go:141] libmachine: () Calling .SetConfigRaw
I0914 18:51:50.049756   21574 main.go:141] libmachine: () Calling .GetMachineName
I0914 18:51:50.049968   21574 main.go:141] libmachine: (functional-281336) Calling .DriverName
I0914 18:51:50.050170   21574 ssh_runner.go:195] Run: systemctl --version
I0914 18:51:50.050203   21574 main.go:141] libmachine: (functional-281336) Calling .GetSSHHostname
I0914 18:51:50.053286   21574 main.go:141] libmachine: (functional-281336) DBG | domain functional-281336 has defined MAC address 52:54:00:d8:b6:ee in network mk-functional-281336
I0914 18:51:50.053709   21574 main.go:141] libmachine: (functional-281336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:b6:ee", ip: ""} in network mk-functional-281336: {Iface:virbr1 ExpiryTime:2023-09-14 19:48:50 +0000 UTC Type:0 Mac:52:54:00:d8:b6:ee Iaid: IPaddr:192.168.50.139 Prefix:24 Hostname:functional-281336 Clientid:01:52:54:00:d8:b6:ee}
I0914 18:51:50.053729   21574 main.go:141] libmachine: (functional-281336) DBG | domain functional-281336 has defined IP address 192.168.50.139 and MAC address 52:54:00:d8:b6:ee in network mk-functional-281336
I0914 18:51:50.053937   21574 main.go:141] libmachine: (functional-281336) Calling .GetSSHPort
I0914 18:51:50.054089   21574 main.go:141] libmachine: (functional-281336) Calling .GetSSHKeyPath
I0914 18:51:50.054243   21574 main.go:141] libmachine: (functional-281336) Calling .GetSSHUsername
I0914 18:51:50.054373   21574 sshutil.go:53] new ssh client: &{IP:192.168.50.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/functional-281336/id_rsa Username:docker}
I0914 18:51:50.140349   21574 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0914 18:51:50.171326   21574 main.go:141] libmachine: Making call to close driver server
I0914 18:51:50.171349   21574 main.go:141] libmachine: (functional-281336) Calling .Close
I0914 18:51:50.171598   21574 main.go:141] libmachine: Successfully made call to close driver server
I0914 18:51:50.171620   21574 main.go:141] libmachine: Making call to close connection to plugin binary
I0914 18:51:50.171629   21574 main.go:141] libmachine: Making call to close driver server
I0914 18:51:50.171639   21574 main.go:141] libmachine: (functional-281336) Calling .Close
I0914 18:51:50.171889   21574 main.go:141] libmachine: Successfully made call to close driver server
I0914 18:51:50.171900   21574 main.go:141] libmachine: (functional-281336) DBG | Closing plugin on server side
I0914 18:51:50.171904   21574 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-281336 image ls --format yaml --alsologtostderr:
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.1
size: "73100000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-281336
size: "32900000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.1
size: "126000000"
- id: b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.1
size: "60100000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: f5a6b296b8a29b4e3d89ffa99e4a86309874ae400e82b3d3993f84e1e3bb0eb9
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 132009f48cc8e1ced94e73d67d6bae47f36a1c20cfc7bf956a3636b442b241e6
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-281336
size: "30"
- id: 821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.1
size: "122000000"
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-281336 image ls --format yaml --alsologtostderr:
I0914 18:51:49.786540   21529 out.go:296] Setting OutFile to fd 1 ...
I0914 18:51:49.786769   21529 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0914 18:51:49.786778   21529 out.go:309] Setting ErrFile to fd 2...
I0914 18:51:49.786782   21529 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0914 18:51:49.786969   21529 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17217-7285/.minikube/bin
I0914 18:51:49.787498   21529 config.go:182] Loaded profile config "functional-281336": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0914 18:51:49.787591   21529 config.go:182] Loaded profile config "functional-281336": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0914 18:51:49.787963   21529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0914 18:51:49.788003   21529 main.go:141] libmachine: Launching plugin server for driver kvm2
I0914 18:51:49.802073   21529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44413
I0914 18:51:49.802642   21529 main.go:141] libmachine: () Calling .GetVersion
I0914 18:51:49.803272   21529 main.go:141] libmachine: Using API Version  1
I0914 18:51:49.803296   21529 main.go:141] libmachine: () Calling .SetConfigRaw
I0914 18:51:49.803673   21529 main.go:141] libmachine: () Calling .GetMachineName
I0914 18:51:49.803849   21529 main.go:141] libmachine: (functional-281336) Calling .GetState
I0914 18:51:49.805649   21529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0914 18:51:49.805681   21529 main.go:141] libmachine: Launching plugin server for driver kvm2
I0914 18:51:49.819614   21529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37643
I0914 18:51:49.820244   21529 main.go:141] libmachine: () Calling .GetVersion
I0914 18:51:49.820728   21529 main.go:141] libmachine: Using API Version  1
I0914 18:51:49.820756   21529 main.go:141] libmachine: () Calling .SetConfigRaw
I0914 18:51:49.821119   21529 main.go:141] libmachine: () Calling .GetMachineName
I0914 18:51:49.821379   21529 main.go:141] libmachine: (functional-281336) Calling .DriverName
I0914 18:51:49.821623   21529 ssh_runner.go:195] Run: systemctl --version
I0914 18:51:49.821653   21529 main.go:141] libmachine: (functional-281336) Calling .GetSSHHostname
I0914 18:51:49.824815   21529 main.go:141] libmachine: (functional-281336) DBG | domain functional-281336 has defined MAC address 52:54:00:d8:b6:ee in network mk-functional-281336
I0914 18:51:49.825138   21529 main.go:141] libmachine: (functional-281336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:b6:ee", ip: ""} in network mk-functional-281336: {Iface:virbr1 ExpiryTime:2023-09-14 19:48:50 +0000 UTC Type:0 Mac:52:54:00:d8:b6:ee Iaid: IPaddr:192.168.50.139 Prefix:24 Hostname:functional-281336 Clientid:01:52:54:00:d8:b6:ee}
I0914 18:51:49.825176   21529 main.go:141] libmachine: (functional-281336) DBG | domain functional-281336 has defined IP address 192.168.50.139 and MAC address 52:54:00:d8:b6:ee in network mk-functional-281336
I0914 18:51:49.825386   21529 main.go:141] libmachine: (functional-281336) Calling .GetSSHPort
I0914 18:51:49.825627   21529 main.go:141] libmachine: (functional-281336) Calling .GetSSHKeyPath
I0914 18:51:49.825776   21529 main.go:141] libmachine: (functional-281336) Calling .GetSSHUsername
I0914 18:51:49.825913   21529 sshutil.go:53] new ssh client: &{IP:192.168.50.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/functional-281336/id_rsa Username:docker}
I0914 18:51:49.955161   21529 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0914 18:51:50.000924   21529 main.go:141] libmachine: Making call to close driver server
I0914 18:51:50.000945   21529 main.go:141] libmachine: (functional-281336) Calling .Close
I0914 18:51:50.001267   21529 main.go:141] libmachine: Successfully made call to close driver server
I0914 18:51:50.001294   21529 main.go:141] libmachine: Making call to close connection to plugin binary
I0914 18:51:50.001307   21529 main.go:141] libmachine: Making call to close driver server
I0914 18:51:50.001317   21529 main.go:141] libmachine: (functional-281336) Calling .Close
I0914 18:51:50.001574   21529 main.go:141] libmachine: Successfully made call to close driver server
I0914 18:51:50.001606   21529 main.go:141] libmachine: Making call to close connection to plugin binary
I0914 18:51:50.001662   21529 main.go:141] libmachine: (functional-281336) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)
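Note: the three ImageList checks above exercise the same listing command with different output formats. A minimal reproduction sketch against a running profile (the profile name below is this run's; substitute your own):

  out/minikube-linux-amd64 -p functional-281336 image ls --format table --alsologtostderr
  out/minikube-linux-amd64 -p functional-281336 image ls --format json --alsologtostderr
  out/minikube-linux-amd64 -p functional-281336 image ls --format yaml --alsologtostderr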

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-281336 ssh pgrep buildkitd: exit status 1 (200.090798ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 image build -t localhost/my-image:functional-281336 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-281336 image build -t localhost/my-image:functional-281336 testdata/build --alsologtostderr: (3.189554367s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-281336 image build -t localhost/my-image:functional-281336 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 3d1213f08f15
Removing intermediate container 3d1213f08f15
---> d992d785c5ea
Step 3/3 : ADD content.txt /
---> 1013ff10691e
Successfully built 1013ff10691e
Successfully tagged localhost/my-image:functional-281336
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-281336 image build -t localhost/my-image:functional-281336 testdata/build --alsologtostderr:
I0914 18:51:50.256231   21638 out.go:296] Setting OutFile to fd 1 ...
I0914 18:51:50.256497   21638 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0914 18:51:50.256510   21638 out.go:309] Setting ErrFile to fd 2...
I0914 18:51:50.256517   21638 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0914 18:51:50.256777   21638 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17217-7285/.minikube/bin
I0914 18:51:50.257399   21638 config.go:182] Loaded profile config "functional-281336": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0914 18:51:50.257894   21638 config.go:182] Loaded profile config "functional-281336": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0914 18:51:50.258257   21638 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0914 18:51:50.258295   21638 main.go:141] libmachine: Launching plugin server for driver kvm2
I0914 18:51:50.272503   21638 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39113
I0914 18:51:50.272959   21638 main.go:141] libmachine: () Calling .GetVersion
I0914 18:51:50.273453   21638 main.go:141] libmachine: Using API Version  1
I0914 18:51:50.273494   21638 main.go:141] libmachine: () Calling .SetConfigRaw
I0914 18:51:50.273786   21638 main.go:141] libmachine: () Calling .GetMachineName
I0914 18:51:50.273957   21638 main.go:141] libmachine: (functional-281336) Calling .GetState
I0914 18:51:50.275815   21638 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0914 18:51:50.275861   21638 main.go:141] libmachine: Launching plugin server for driver kvm2
I0914 18:51:50.289881   21638 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40123
I0914 18:51:50.290214   21638 main.go:141] libmachine: () Calling .GetVersion
I0914 18:51:50.290724   21638 main.go:141] libmachine: Using API Version  1
I0914 18:51:50.290743   21638 main.go:141] libmachine: () Calling .SetConfigRaw
I0914 18:51:50.291101   21638 main.go:141] libmachine: () Calling .GetMachineName
I0914 18:51:50.291294   21638 main.go:141] libmachine: (functional-281336) Calling .DriverName
I0914 18:51:50.291505   21638 ssh_runner.go:195] Run: systemctl --version
I0914 18:51:50.291534   21638 main.go:141] libmachine: (functional-281336) Calling .GetSSHHostname
I0914 18:51:50.294354   21638 main.go:141] libmachine: (functional-281336) DBG | domain functional-281336 has defined MAC address 52:54:00:d8:b6:ee in network mk-functional-281336
I0914 18:51:50.294767   21638 main.go:141] libmachine: (functional-281336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:b6:ee", ip: ""} in network mk-functional-281336: {Iface:virbr1 ExpiryTime:2023-09-14 19:48:50 +0000 UTC Type:0 Mac:52:54:00:d8:b6:ee Iaid: IPaddr:192.168.50.139 Prefix:24 Hostname:functional-281336 Clientid:01:52:54:00:d8:b6:ee}
I0914 18:51:50.294809   21638 main.go:141] libmachine: (functional-281336) DBG | domain functional-281336 has defined IP address 192.168.50.139 and MAC address 52:54:00:d8:b6:ee in network mk-functional-281336
I0914 18:51:50.294926   21638 main.go:141] libmachine: (functional-281336) Calling .GetSSHPort
I0914 18:51:50.295110   21638 main.go:141] libmachine: (functional-281336) Calling .GetSSHKeyPath
I0914 18:51:50.295278   21638 main.go:141] libmachine: (functional-281336) Calling .GetSSHUsername
I0914 18:51:50.295421   21638 sshutil.go:53] new ssh client: &{IP:192.168.50.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/functional-281336/id_rsa Username:docker}
I0914 18:51:50.401124   21638 build_images.go:151] Building image from path: /tmp/build.2613720835.tar
I0914 18:51:50.401183   21638 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0914 18:51:50.414217   21638 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2613720835.tar
I0914 18:51:50.420903   21638 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2613720835.tar: stat -c "%s %y" /var/lib/minikube/build/build.2613720835.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2613720835.tar': No such file or directory
I0914 18:51:50.420934   21638 ssh_runner.go:362] scp /tmp/build.2613720835.tar --> /var/lib/minikube/build/build.2613720835.tar (3072 bytes)
I0914 18:51:50.448664   21638 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2613720835
I0914 18:51:50.458151   21638 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2613720835 -xf /var/lib/minikube/build/build.2613720835.tar
I0914 18:51:50.466962   21638 docker.go:339] Building image: /var/lib/minikube/build/build.2613720835
I0914 18:51:50.467010   21638 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-281336 /var/lib/minikube/build/build.2613720835
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I0914 18:51:53.362563   21638 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-281336 /var/lib/minikube/build/build.2613720835: (2.895530109s)
I0914 18:51:53.362628   21638 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2613720835
I0914 18:51:53.375049   21638 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2613720835.tar
I0914 18:51:53.391518   21638 build_images.go:207] Built localhost/my-image:functional-281336 from /tmp/build.2613720835.tar
I0914 18:51:53.391552   21638 build_images.go:123] succeeded building to: functional-281336
I0914 18:51:53.391556   21638 build_images.go:124] failed building to: 
I0914 18:51:53.391580   21638 main.go:141] libmachine: Making call to close driver server
I0914 18:51:53.391591   21638 main.go:141] libmachine: (functional-281336) Calling .Close
I0914 18:51:53.391902   21638 main.go:141] libmachine: (functional-281336) DBG | Closing plugin on server side
I0914 18:51:53.391949   21638 main.go:141] libmachine: Successfully made call to close driver server
I0914 18:51:53.391959   21638 main.go:141] libmachine: Making call to close connection to plugin binary
I0914 18:51:53.391976   21638 main.go:141] libmachine: Making call to close driver server
I0914 18:51:53.391988   21638 main.go:141] libmachine: (functional-281336) Calling .Close
I0914 18:51:53.392208   21638 main.go:141] libmachine: Successfully made call to close driver server
I0914 18:51:53.392226   21638 main.go:141] libmachine: Making call to close connection to plugin binary
I0914 18:51:53.392253   21638 main.go:141] libmachine: (functional-281336) DBG | Closing plugin on server side
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.60s)
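Note: the build log above corresponds to a three-step Dockerfile (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt) taken from the repo's testdata/build directory. A sketch of the sequence as exercised in this run; the pgrep step is only a probe for a BuildKit daemon:

  out/minikube-linux-amd64 -p functional-281336 ssh pgrep buildkitd    # exited 1 here, so the build below went through the legacy builder
  out/minikube-linux-amd64 -p functional-281336 image build -t localhost/my-image:functional-281336 testdata/build --alsologtostderr
  out/minikube-linux-amd64 -p functional-281336 image ls               # localhost/my-image:functional-281336 should now be listed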

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.53s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.513404186s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-281336
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.53s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 image load --daemon gcr.io/google-containers/addon-resizer:functional-281336 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-281336 image load --daemon gcr.io/google-containers/addon-resizer:functional-281336 --alsologtostderr: (5.060250819s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.31s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (0.88s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-281336 docker-env) && out/minikube-linux-amd64 status -p functional-281336"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-281336 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.88s)
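Note: the DockerEnv check points the host docker CLI at the VM's daemon for a single shell invocation. A sketch, assuming bash; the --unset form for restoring the environment is an assumption and should be checked against docker-env --help:

  eval $(out/minikube-linux-amd64 -p functional-281336 docker-env) && docker images
  eval $(out/minikube-linux-amd64 -p functional-281336 docker-env --unset)    # assumed flag; reverts the exported DOCKER_* variables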

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 image load --daemon gcr.io/google-containers/addon-resizer:functional-281336 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-281336 image load --daemon gcr.io/google-containers/addon-resizer:functional-281336 --alsologtostderr: (2.29044045s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.50s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.24174386s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-281336
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 image load --daemon gcr.io/google-containers/addon-resizer:functional-281336 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-281336 image load --daemon gcr.io/google-containers/addon-resizer:functional-281336 --alsologtostderr: (4.701332967s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.7s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 image save gcr.io/google-containers/addon-resizer:functional-281336 /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr
E0914 18:51:43.621378   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/addons-417207/client.crt: no such file or directory
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-281336 image save gcr.io/google-containers/addon-resizer:functional-281336 /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr: (1.695846604s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.70s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 image rm gcr.io/google-containers/addon-resizer:functional-281336 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 image load /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-281336 image load /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr: (2.02743297s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.91s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-281336
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-281336 image save --daemon gcr.io/google-containers/addon-resizer:functional-281336 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-281336 image save --daemon gcr.io/google-containers/addon-resizer:functional-281336 --alsologtostderr: (1.872922944s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-281336
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.91s)
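Note: taken together, the SaveToFile/Remove/LoadFromFile/SaveDaemon checks above form a round trip for moving an image between the host and the VM's runtime. A sketch assembled from this run's commands, with the tar path shortened to a relative one:

  out/minikube-linux-amd64 -p functional-281336 image save gcr.io/google-containers/addon-resizer:functional-281336 ./addon-resizer-save.tar --alsologtostderr
  out/minikube-linux-amd64 -p functional-281336 image rm gcr.io/google-containers/addon-resizer:functional-281336 --alsologtostderr
  out/minikube-linux-amd64 -p functional-281336 image load ./addon-resizer-save.tar --alsologtostderr
  out/minikube-linux-amd64 -p functional-281336 image save --daemon gcr.io/google-containers/addon-resizer:functional-281336 --alsologtostderr
  docker image inspect gcr.io/google-containers/addon-resizer:functional-281336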

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-281336
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-281336
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-281336
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestGvisorAddon (288.02s)

=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon

=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-436283 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
E0914 19:24:05.709606   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/addons-417207/client.crt: no such file or directory
E0914 19:24:10.872950   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/skaffold-872503/client.crt: no such file or directory
gvisor_addon_test.go:52: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-436283 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m14.175179606s)
gvisor_addon_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-436283 cache add gcr.io/k8s-minikube/gvisor-addon:2
gvisor_addon_test.go:58: (dbg) Done: out/minikube-linux-amd64 -p gvisor-436283 cache add gcr.io/k8s-minikube/gvisor-addon:2: (23.00839364s)
gvisor_addon_test.go:63: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-436283 addons enable gvisor
gvisor_addon_test.go:63: (dbg) Done: out/minikube-linux-amd64 -p gvisor-436283 addons enable gvisor: (4.38884131s)
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [a00f6e85-c6cc-4de4-a984-b8628b356ae4] Running
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 5.02923993s
gvisor_addon_test.go:73: (dbg) Run:  kubectl --context gvisor-436283 replace --force -f testdata/nginx-gvisor.yaml
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [70eb0c60-9bd5-4691-880c-1abf920bbe3d] Pending
helpers_test.go:344: "nginx-gvisor" [70eb0c60-9bd5-4691-880c-1abf920bbe3d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-gvisor" [70eb0c60-9bd5-4691-880c-1abf920bbe3d] Running
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 14.026486484s
gvisor_addon_test.go:83: (dbg) Run:  out/minikube-linux-amd64 stop -p gvisor-436283
gvisor_addon_test.go:83: (dbg) Done: out/minikube-linux-amd64 stop -p gvisor-436283: (1m32.440072263s)
gvisor_addon_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-436283 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-436283 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m3.277707921s)
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [a00f6e85-c6cc-4de4-a984-b8628b356ae4] Running
E0914 19:28:29.910465   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/skaffold-872503/client.crt: no such file or directory
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 5.02330481s
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [70eb0c60-9bd5-4691-880c-1abf920bbe3d] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.076160583s
helpers_test.go:175: Cleaning up "gvisor-436283" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-436283
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p gvisor-436283: (1.278324102s)
--- PASS: TestGvisorAddon (288.02s)
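Note: the gvisor flow above reduces to starting a containerd-backed profile, pre-caching the addon image, enabling the addon, and scheduling a pod that requests the gvisor runtime. A sketch from this run's commands; the nginx-gvisor.yaml manifest lives in the repo's testdata and is assumed to reference a RuntimeClass named gvisor:

  out/minikube-linux-amd64 start -p gvisor-436283 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2
  out/minikube-linux-amd64 -p gvisor-436283 cache add gcr.io/k8s-minikube/gvisor-addon:2
  out/minikube-linux-amd64 -p gvisor-436283 addons enable gvisor
  kubectl --context gvisor-436283 replace --force -f testdata/nginx-gvisor.yaml
  kubectl --context gvisor-436283 get pods -l run=nginx,runtime=gvisor    # expect Running once the sandbox comes up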

                                                
                                    
TestImageBuild/serial/Setup (52.59s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-888481 --driver=kvm2 
E0914 18:52:24.583238   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/addons-417207/client.crt: no such file or directory
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-888481 --driver=kvm2 : (52.590105248s)
--- PASS: TestImageBuild/serial/Setup (52.59s)

                                                
                                    
TestImageBuild/serial/NormalBuild (1.75s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-888481
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-888481: (1.74539255s)
--- PASS: TestImageBuild/serial/NormalBuild (1.75s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (1.29s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-888481
image_test.go:99: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-888481: (1.290850659s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.29s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.37s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-888481
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.37s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.27s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-888481
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.27s)
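Note: the four TestImageBuild variants above differ only in the flags passed to minikube image build. For reference, the combinations exercised against the image-888481 profile were (-f in the last variant appears to be resolved relative to the build context):

  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-888481
  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-888481
  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-888481
  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-888481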

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (81.64s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-306177 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 
E0914 18:53:46.504002   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/addons-417207/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-306177 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 : (1m21.640576891s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (81.64s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (17.46s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-306177 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-306177 addons enable ingress --alsologtostderr -v=5: (17.464671453s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (17.46s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.53s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-306177 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.53s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (45.33s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-306177 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-306177 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (13.079135604s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-306177 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-306177 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [a66f11e2-e1a5-4701-9062-cd38490928e8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [a66f11e2-e1a5-4701-9062-cd38490928e8] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 11.028224601s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-306177 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-306177 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-306177 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.39.20
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-306177 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-306177 addons disable ingress-dns --alsologtostderr -v=1: (12.479617599s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-306177 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-306177 addons disable ingress --alsologtostderr -v=1: (7.482395222s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (45.33s)
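Note: the ingress validation above enables the ingress and ingress-dns addons on the legacy (v1.18.20) cluster, waits for the controller, deploys a test Ingress plus backing pod/service from the repo's testdata, and curls through the controller with a Host header. A sketch from this run's commands:

  out/minikube-linux-amd64 -p ingress-addon-legacy-306177 addons enable ingress --alsologtostderr -v=5
  out/minikube-linux-amd64 -p ingress-addon-legacy-306177 addons enable ingress-dns --alsologtostderr -v=5
  kubectl --context ingress-addon-legacy-306177 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
  kubectl --context ingress-addon-legacy-306177 replace --force -f testdata/nginx-ingress-v1beta1.yaml
  kubectl --context ingress-addon-legacy-306177 replace --force -f testdata/nginx-pod-svc.yaml
  out/minikube-linux-amd64 -p ingress-addon-legacy-306177 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"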

                                                
                                    
TestJSONOutput/start/Command (103.11s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-782624 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 
E0914 18:56:02.658820   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/addons-417207/client.crt: no such file or directory
E0914 18:56:10.628126   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/functional-281336/client.crt: no such file or directory
E0914 18:56:10.633397   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/functional-281336/client.crt: no such file or directory
E0914 18:56:10.643717   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/functional-281336/client.crt: no such file or directory
E0914 18:56:10.664009   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/functional-281336/client.crt: no such file or directory
E0914 18:56:10.704408   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/functional-281336/client.crt: no such file or directory
E0914 18:56:10.784851   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/functional-281336/client.crt: no such file or directory
E0914 18:56:10.945299   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/functional-281336/client.crt: no such file or directory
E0914 18:56:11.265902   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/functional-281336/client.crt: no such file or directory
E0914 18:56:11.906943   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/functional-281336/client.crt: no such file or directory
E0914 18:56:13.187733   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/functional-281336/client.crt: no such file or directory
E0914 18:56:15.749535   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/functional-281336/client.crt: no such file or directory
E0914 18:56:20.869766   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/functional-281336/client.crt: no such file or directory
E0914 18:56:30.346082   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/addons-417207/client.crt: no such file or directory
E0914 18:56:31.110006   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/functional-281336/client.crt: no such file or directory
E0914 18:56:51.590982   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/functional-281336/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-782624 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 : (1m43.112759808s)
--- PASS: TestJSONOutput/start/Command (103.11s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.56s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-782624 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.56s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.51s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-782624 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.51s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (13.1s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-782624 --output=json --user=testUser
E0914 18:57:32.551938   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/functional-281336/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-782624 --output=json --user=testUser: (13.09511115s)
--- PASS: TestJSONOutput/stop/Command (13.10s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.19s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-671373 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-671373 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (59.77029ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b2448676-260f-47fa-b5df-19f8cc8a2d75","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-671373] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"486d0d2f-c47f-4c51-b1c7-cb20e6552f17","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17217"}}
	{"specversion":"1.0","id":"ffec96f2-449d-49aa-9018-c88de6173c54","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"492aa976-78ea-4560-8939-da717123bb6a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17217-7285/kubeconfig"}}
	{"specversion":"1.0","id":"b5446e28-c1af-4c45-a8a7-bc431b3bfd06","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17217-7285/.minikube"}}
	{"specversion":"1.0","id":"62959262-11df-4beb-ac81-a4a1bf7e7e48","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"38979fee-7a98-4b07-a2f4-27f601c13ec7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7bfcbac6-8bee-4cb7-95d6-df4eb3b1d76e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-671373" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-671373
--- PASS: TestErrorJSONOutput (0.19s)
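Note (editorial, not test output): each line in the stdout block above is a CloudEvents-style envelope (specversion, id, source, type, data) produced by --output=json, and the final io.k8s.sigs.minikube.error event carries the exit code and message the test asserts on. The Go sketch below is illustrative only and is not part of json_output_test.go; the program and struct names are invented for the example. It reads such lines from stdin and surfaces error events.

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// minikubeEvent is a hypothetical, minimal view of one JSON line emitted by
	// `minikube start --output=json`; only the fields used below are declared.
	type minikubeEvent struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		// Example usage: minikube start -p demo --output=json | go run decode.go
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var ev minikubeEvent
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // skip any non-JSON lines
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("error %s (exit code %s): %s\n",
					ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
			}
		}
	}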

                                                
                                    
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (107.07s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-601972 --driver=kvm2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-601972 --driver=kvm2 : (50.460025775s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-604261 --driver=kvm2 
E0914 18:58:54.472877   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/functional-281336/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-604261 --driver=kvm2 : (53.839862976s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-601972
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-604261
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-604261" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-604261
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-604261: (1.001915701s)
helpers_test.go:175: Cleaning up "first-601972" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-601972
--- PASS: TestMinikubeProfile (107.07s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (29.85s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-104499 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 
E0914 18:59:51.847542   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/ingress-addon-legacy-306177/client.crt: no such file or directory
E0914 18:59:51.852794   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/ingress-addon-legacy-306177/client.crt: no such file or directory
E0914 18:59:51.863051   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/ingress-addon-legacy-306177/client.crt: no such file or directory
E0914 18:59:51.883396   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/ingress-addon-legacy-306177/client.crt: no such file or directory
E0914 18:59:51.923690   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/ingress-addon-legacy-306177/client.crt: no such file or directory
E0914 18:59:52.004123   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/ingress-addon-legacy-306177/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-104499 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 : (28.845906462s)
E0914 18:59:52.164300   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/ingress-addon-legacy-306177/client.crt: no such file or directory
E0914 18:59:52.484885   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/ingress-addon-legacy-306177/client.crt: no such file or directory
--- PASS: TestMountStart/serial/StartWithMountFirst (29.85s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-104499 ssh -- ls /minikube-host
E0914 18:59:53.125042   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/ingress-addon-legacy-306177/client.crt: no such file or directory
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-104499 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.36s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (28.74s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-120890 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 
E0914 18:59:54.406138   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/ingress-addon-legacy-306177/client.crt: no such file or directory
E0914 18:59:56.966885   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/ingress-addon-legacy-306177/client.crt: no such file or directory
E0914 19:00:02.087994   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/ingress-addon-legacy-306177/client.crt: no such file or directory
E0914 19:00:12.328744   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/ingress-addon-legacy-306177/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-120890 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 : (27.744215454s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.74s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-120890 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-120890 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.88s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-104499 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.88s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-120890 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-120890 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                    
TestMountStart/serial/Stop (2.13s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-120890
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-120890: (2.130082663s)
--- PASS: TestMountStart/serial/Stop (2.13s)

                                                
                                    
TestMountStart/serial/RestartStopped (25.89s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-120890
E0914 19:00:32.809894   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/ingress-addon-legacy-306177/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-120890: (24.889496331s)
--- PASS: TestMountStart/serial/RestartStopped (25.89s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-120890 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-120890 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.39s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (138.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-040952 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 
E0914 19:01:02.659009   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/addons-417207/client.crt: no such file or directory
E0914 19:01:10.630066   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/functional-281336/client.crt: no such file or directory
E0914 19:01:13.770284   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/ingress-addon-legacy-306177/client.crt: no such file or directory
E0914 19:01:38.314189   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/functional-281336/client.crt: no such file or directory
E0914 19:02:35.690550   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/ingress-addon-legacy-306177/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-040952 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 : (2m18.101124293s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040952 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (138.54s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-040952 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-040952 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-040952 -- rollout status deployment/busybox: (3.895333036s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-040952 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-040952 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-040952 -- exec busybox-5bc68d56bd-8xj5t -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-040952 -- exec busybox-5bc68d56bd-msf7r -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-040952 -- exec busybox-5bc68d56bd-8xj5t -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-040952 -- exec busybox-5bc68d56bd-msf7r -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-040952 -- exec busybox-5bc68d56bd-8xj5t -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-040952 -- exec busybox-5bc68d56bd-msf7r -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.74s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-040952 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-040952 -- exec busybox-5bc68d56bd-8xj5t -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-040952 -- exec busybox-5bc68d56bd-8xj5t -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-040952 -- exec busybox-5bc68d56bd-msf7r -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-040952 -- exec busybox-5bc68d56bd-msf7r -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.85s)

                                                
                                    
TestMultiNode/serial/AddNode (50.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-040952 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-040952 -v 3 --alsologtostderr: (50.248286896s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040952 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (50.81s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.20s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040952 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040952 cp testdata/cp-test.txt multinode-040952:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040952 ssh -n multinode-040952 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040952 cp multinode-040952:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3444693695/001/cp-test_multinode-040952.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040952 ssh -n multinode-040952 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040952 cp multinode-040952:/home/docker/cp-test.txt multinode-040952-m02:/home/docker/cp-test_multinode-040952_multinode-040952-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040952 ssh -n multinode-040952 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040952 ssh -n multinode-040952-m02 "sudo cat /home/docker/cp-test_multinode-040952_multinode-040952-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040952 cp multinode-040952:/home/docker/cp-test.txt multinode-040952-m03:/home/docker/cp-test_multinode-040952_multinode-040952-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040952 ssh -n multinode-040952 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040952 ssh -n multinode-040952-m03 "sudo cat /home/docker/cp-test_multinode-040952_multinode-040952-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040952 cp testdata/cp-test.txt multinode-040952-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040952 ssh -n multinode-040952-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040952 cp multinode-040952-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3444693695/001/cp-test_multinode-040952-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040952 ssh -n multinode-040952-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040952 cp multinode-040952-m02:/home/docker/cp-test.txt multinode-040952:/home/docker/cp-test_multinode-040952-m02_multinode-040952.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040952 ssh -n multinode-040952-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040952 ssh -n multinode-040952 "sudo cat /home/docker/cp-test_multinode-040952-m02_multinode-040952.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040952 cp multinode-040952-m02:/home/docker/cp-test.txt multinode-040952-m03:/home/docker/cp-test_multinode-040952-m02_multinode-040952-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040952 ssh -n multinode-040952-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040952 ssh -n multinode-040952-m03 "sudo cat /home/docker/cp-test_multinode-040952-m02_multinode-040952-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040952 cp testdata/cp-test.txt multinode-040952-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040952 ssh -n multinode-040952-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040952 cp multinode-040952-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3444693695/001/cp-test_multinode-040952-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040952 ssh -n multinode-040952-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040952 cp multinode-040952-m03:/home/docker/cp-test.txt multinode-040952:/home/docker/cp-test_multinode-040952-m03_multinode-040952.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040952 ssh -n multinode-040952-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040952 ssh -n multinode-040952 "sudo cat /home/docker/cp-test_multinode-040952-m03_multinode-040952.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040952 cp multinode-040952-m03:/home/docker/cp-test.txt multinode-040952-m02:/home/docker/cp-test_multinode-040952-m03_multinode-040952-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040952 ssh -n multinode-040952-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040952 ssh -n multinode-040952-m02 "sudo cat /home/docker/cp-test_multinode-040952-m03_multinode-040952-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.20s)

                                                
                                    
TestMultiNode/serial/StopNode (3.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040952 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-040952 node stop m03: (3.079391382s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040952 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-040952 status: exit status 7 (434.149251ms)

                                                
                                                
-- stdout --
	multinode-040952
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-040952-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-040952-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040952 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-040952 status --alsologtostderr: exit status 7 (431.177847ms)

                                                
                                                
-- stdout --
	multinode-040952
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-040952-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-040952-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 19:04:19.921699   28832 out.go:296] Setting OutFile to fd 1 ...
	I0914 19:04:19.921811   28832 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 19:04:19.921821   28832 out.go:309] Setting ErrFile to fd 2...
	I0914 19:04:19.921828   28832 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 19:04:19.922017   28832 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17217-7285/.minikube/bin
	I0914 19:04:19.922191   28832 out.go:303] Setting JSON to false
	I0914 19:04:19.922228   28832 mustload.go:65] Loading cluster: multinode-040952
	I0914 19:04:19.922282   28832 notify.go:220] Checking for updates...
	I0914 19:04:19.922746   28832 config.go:182] Loaded profile config "multinode-040952": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 19:04:19.922766   28832 status.go:255] checking status of multinode-040952 ...
	I0914 19:04:19.923214   28832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0914 19:04:19.923261   28832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 19:04:19.939418   28832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33357
	I0914 19:04:19.939995   28832 main.go:141] libmachine: () Calling .GetVersion
	I0914 19:04:19.941021   28832 main.go:141] libmachine: Using API Version  1
	I0914 19:04:19.941043   28832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 19:04:19.941398   28832 main.go:141] libmachine: () Calling .GetMachineName
	I0914 19:04:19.941628   28832 main.go:141] libmachine: (multinode-040952) Calling .GetState
	I0914 19:04:19.943276   28832 status.go:330] multinode-040952 host status = "Running" (err=<nil>)
	I0914 19:04:19.943295   28832 host.go:66] Checking if "multinode-040952" exists ...
	I0914 19:04:19.943595   28832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0914 19:04:19.943625   28832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 19:04:19.958068   28832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36997
	I0914 19:04:19.958439   28832 main.go:141] libmachine: () Calling .GetVersion
	I0914 19:04:19.958866   28832 main.go:141] libmachine: Using API Version  1
	I0914 19:04:19.958890   28832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 19:04:19.959195   28832 main.go:141] libmachine: () Calling .GetMachineName
	I0914 19:04:19.959366   28832 main.go:141] libmachine: (multinode-040952) Calling .GetIP
	I0914 19:04:19.962142   28832 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:04:19.962571   28832 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
	I0914 19:04:19.962601   28832 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:04:19.962742   28832 host.go:66] Checking if "multinode-040952" exists ...
	I0914 19:04:19.963062   28832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0914 19:04:19.963098   28832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 19:04:19.978500   28832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42321
	I0914 19:04:19.978865   28832 main.go:141] libmachine: () Calling .GetVersion
	I0914 19:04:19.979246   28832 main.go:141] libmachine: Using API Version  1
	I0914 19:04:19.979268   28832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 19:04:19.979566   28832 main.go:141] libmachine: () Calling .GetMachineName
	I0914 19:04:19.979728   28832 main.go:141] libmachine: (multinode-040952) Calling .DriverName
	I0914 19:04:19.979896   28832 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 19:04:19.979919   28832 main.go:141] libmachine: (multinode-040952) Calling .GetSSHHostname
	I0914 19:04:19.982273   28832 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:04:19.982664   28832 main.go:141] libmachine: (multinode-040952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:8d:f2", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:01:09 +0000 UTC Type:0 Mac:52:54:00:0b:8d:f2 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-040952 Clientid:01:52:54:00:0b:8d:f2}
	I0914 19:04:19.982698   28832 main.go:141] libmachine: (multinode-040952) DBG | domain multinode-040952 has defined IP address 192.168.39.14 and MAC address 52:54:00:0b:8d:f2 in network mk-multinode-040952
	I0914 19:04:19.982802   28832 main.go:141] libmachine: (multinode-040952) Calling .GetSSHPort
	I0914 19:04:19.983010   28832 main.go:141] libmachine: (multinode-040952) Calling .GetSSHKeyPath
	I0914 19:04:19.983147   28832 main.go:141] libmachine: (multinode-040952) Calling .GetSSHUsername
	I0914 19:04:19.983306   28832 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/multinode-040952/id_rsa Username:docker}
	I0914 19:04:20.077284   28832 ssh_runner.go:195] Run: systemctl --version
	I0914 19:04:20.083364   28832 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 19:04:20.096848   28832 kubeconfig.go:92] found "multinode-040952" server: "https://192.168.39.14:8443"
	I0914 19:04:20.096875   28832 api_server.go:166] Checking apiserver status ...
	I0914 19:04:20.096904   28832 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 19:04:20.108377   28832 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1784/cgroup
	I0914 19:04:20.117038   28832 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod8756931ebb3ad632d1fa90a79d546b12/7ae1932584ffa476eb879f98377c768977f7c4e99f217802fcbba50c0c0f8eec"
	I0914 19:04:20.117110   28832 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod8756931ebb3ad632d1fa90a79d546b12/7ae1932584ffa476eb879f98377c768977f7c4e99f217802fcbba50c0c0f8eec/freezer.state
	I0914 19:04:20.126043   28832 api_server.go:204] freezer state: "THAWED"
	I0914 19:04:20.126070   28832 api_server.go:253] Checking apiserver healthz at https://192.168.39.14:8443/healthz ...
	I0914 19:04:20.130722   28832 api_server.go:279] https://192.168.39.14:8443/healthz returned 200:
	ok
	I0914 19:04:20.130743   28832 status.go:421] multinode-040952 apiserver status = Running (err=<nil>)
	I0914 19:04:20.130753   28832 status.go:257] multinode-040952 status: &{Name:multinode-040952 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 19:04:20.130776   28832 status.go:255] checking status of multinode-040952-m02 ...
	I0914 19:04:20.131085   28832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0914 19:04:20.131115   28832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 19:04:20.145384   28832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43605
	I0914 19:04:20.145858   28832 main.go:141] libmachine: () Calling .GetVersion
	I0914 19:04:20.146276   28832 main.go:141] libmachine: Using API Version  1
	I0914 19:04:20.146295   28832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 19:04:20.146672   28832 main.go:141] libmachine: () Calling .GetMachineName
	I0914 19:04:20.146817   28832 main.go:141] libmachine: (multinode-040952-m02) Calling .GetState
	I0914 19:04:20.148270   28832 status.go:330] multinode-040952-m02 host status = "Running" (err=<nil>)
	I0914 19:04:20.148286   28832 host.go:66] Checking if "multinode-040952-m02" exists ...
	I0914 19:04:20.148554   28832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0914 19:04:20.148579   28832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 19:04:20.163038   28832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46405
	I0914 19:04:20.163403   28832 main.go:141] libmachine: () Calling .GetVersion
	I0914 19:04:20.163835   28832 main.go:141] libmachine: Using API Version  1
	I0914 19:04:20.163855   28832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 19:04:20.164145   28832 main.go:141] libmachine: () Calling .GetMachineName
	I0914 19:04:20.164293   28832 main.go:141] libmachine: (multinode-040952-m02) Calling .GetIP
	I0914 19:04:20.166992   28832 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:04:20.167426   28832 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:02:25 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
	I0914 19:04:20.167459   28832 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:04:20.167576   28832 host.go:66] Checking if "multinode-040952-m02" exists ...
	I0914 19:04:20.167857   28832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0914 19:04:20.167881   28832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 19:04:20.181661   28832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44479
	I0914 19:04:20.182020   28832 main.go:141] libmachine: () Calling .GetVersion
	I0914 19:04:20.182430   28832 main.go:141] libmachine: Using API Version  1
	I0914 19:04:20.182447   28832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 19:04:20.182773   28832 main.go:141] libmachine: () Calling .GetMachineName
	I0914 19:04:20.182932   28832 main.go:141] libmachine: (multinode-040952-m02) Calling .DriverName
	I0914 19:04:20.183101   28832 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 19:04:20.183126   28832 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHHostname
	I0914 19:04:20.185696   28832 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:04:20.186169   28832 main.go:141] libmachine: (multinode-040952-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:0b:03", ip: ""} in network mk-multinode-040952: {Iface:virbr1 ExpiryTime:2023-09-14 20:02:25 +0000 UTC Type:0 Mac:52:54:00:2e:0b:03 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-040952-m02 Clientid:01:52:54:00:2e:0b:03}
	I0914 19:04:20.186206   28832 main.go:141] libmachine: (multinode-040952-m02) DBG | domain multinode-040952-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:2e:0b:03 in network mk-multinode-040952
	I0914 19:04:20.186343   28832 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHPort
	I0914 19:04:20.186513   28832 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHKeyPath
	I0914 19:04:20.186648   28832 main.go:141] libmachine: (multinode-040952-m02) Calling .GetSSHUsername
	I0914 19:04:20.186792   28832 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17217-7285/.minikube/machines/multinode-040952-m02/id_rsa Username:docker}
	I0914 19:04:20.284724   28832 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 19:04:20.298061   28832 status.go:257] multinode-040952-m02 status: &{Name:multinode-040952-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0914 19:04:20.298124   28832 status.go:255] checking status of multinode-040952-m03 ...
	I0914 19:04:20.298460   28832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0914 19:04:20.298488   28832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 19:04:20.312933   28832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33991
	I0914 19:04:20.313337   28832 main.go:141] libmachine: () Calling .GetVersion
	I0914 19:04:20.313832   28832 main.go:141] libmachine: Using API Version  1
	I0914 19:04:20.313852   28832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 19:04:20.314137   28832 main.go:141] libmachine: () Calling .GetMachineName
	I0914 19:04:20.314299   28832 main.go:141] libmachine: (multinode-040952-m03) Calling .GetState
	I0914 19:04:20.315765   28832 status.go:330] multinode-040952-m03 host status = "Stopped" (err=<nil>)
	I0914 19:04:20.315784   28832 status.go:343] host is not running, skipping remaining checks
	I0914 19:04:20.315791   28832 status.go:257] multinode-040952-m03 status: &{Name:multinode-040952-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.95s)
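Note (editorial, not test output): the stderr trace above ends with status.go printing one status struct per node (Name, Host, Kubelet, APIServer, Kubeconfig, Worker, ...). The Go sketch below is illustrative only, not test code: it runs `minikube status --output json` for the same profile (the flag the CopyFile test uses) and decodes the result into a matching struct. The JSON field names and the array-versus-object shape are assumptions inferred from the struct dump above, and a `minikube` binary on PATH is assumed.

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
		"os/exec"
	)

	// nodeStatus mirrors the fields seen in the status dump above; the JSON
	// field names are assumed, not verified against the minikube source.
	type nodeStatus struct {
		Name       string
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
		Worker     bool
	}

	func main() {
		// Assumes `minikube` is on PATH and the profile already exists.
		out, err := exec.Command("minikube", "-p", "multinode-040952",
			"status", "--output", "json").Output()
		// `minikube status` exits non-zero (e.g. 7) when a node is stopped,
		// so keep whatever stdout was captured even when err != nil.
		if len(out) == 0 && err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		var nodes []nodeStatus
		if err := json.Unmarshal(out, &nodes); err != nil {
			// A single-node profile may emit one object instead of an array.
			var single nodeStatus
			if json.Unmarshal(out, &single) != nil {
				fmt.Fprintln(os.Stderr, err)
				os.Exit(1)
			}
			nodes = []nodeStatus{single}
		}
		for _, n := range nodes {
			fmt.Printf("%s: host=%s kubelet=%s worker=%v\n",
				n.Name, n.Host, n.Kubelet, n.Worker)
		}
	}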

                                                
                                    
TestMultiNode/serial/StartAfterStop (32.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040952 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-040952 node start m03 --alsologtostderr: (31.49775992s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040952 status
E0914 19:04:51.846625   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/ingress-addon-legacy-306177/client.crt: no such file or directory
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (32.12s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (112.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040952 stop
E0914 19:07:25.708590   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/addons-417207/client.crt: no such file or directory
multinode_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p multinode-040952 stop: (1m52.432991427s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040952 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-040952 status: exit status 7 (76.798497ms)

                                                
                                                
-- stdout --
	multinode-040952
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-040952-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040952 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-040952 status --alsologtostderr: exit status 7 (71.757169ms)

                                                
                                                
-- stdout --
	multinode-040952
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-040952-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 19:08:40.175418   30356 out.go:296] Setting OutFile to fd 1 ...
	I0914 19:08:40.175676   30356 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 19:08:40.175685   30356 out.go:309] Setting ErrFile to fd 2...
	I0914 19:08:40.175690   30356 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0914 19:08:40.175899   30356 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17217-7285/.minikube/bin
	I0914 19:08:40.176084   30356 out.go:303] Setting JSON to false
	I0914 19:08:40.176116   30356 mustload.go:65] Loading cluster: multinode-040952
	I0914 19:08:40.176213   30356 notify.go:220] Checking for updates...
	I0914 19:08:40.176530   30356 config.go:182] Loaded profile config "multinode-040952": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0914 19:08:40.176544   30356 status.go:255] checking status of multinode-040952 ...
	I0914 19:08:40.176882   30356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0914 19:08:40.176935   30356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 19:08:40.191413   30356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43741
	I0914 19:08:40.192170   30356 main.go:141] libmachine: () Calling .GetVersion
	I0914 19:08:40.192703   30356 main.go:141] libmachine: Using API Version  1
	I0914 19:08:40.192725   30356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 19:08:40.193038   30356 main.go:141] libmachine: () Calling .GetMachineName
	I0914 19:08:40.193214   30356 main.go:141] libmachine: (multinode-040952) Calling .GetState
	I0914 19:08:40.194632   30356 status.go:330] multinode-040952 host status = "Stopped" (err=<nil>)
	I0914 19:08:40.194646   30356 status.go:343] host is not running, skipping remaining checks
	I0914 19:08:40.194651   30356 status.go:257] multinode-040952 status: &{Name:multinode-040952 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 19:08:40.194683   30356 status.go:255] checking status of multinode-040952-m02 ...
	I0914 19:08:40.194933   30356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0914 19:08:40.194964   30356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0914 19:08:40.208447   30356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45247
	I0914 19:08:40.208805   30356 main.go:141] libmachine: () Calling .GetVersion
	I0914 19:08:40.209152   30356 main.go:141] libmachine: Using API Version  1
	I0914 19:08:40.209165   30356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0914 19:08:40.209491   30356 main.go:141] libmachine: () Calling .GetMachineName
	I0914 19:08:40.209639   30356 main.go:141] libmachine: (multinode-040952-m02) Calling .GetState
	I0914 19:08:40.211053   30356 status.go:330] multinode-040952-m02 host status = "Stopped" (err=<nil>)
	I0914 19:08:40.211063   30356 status.go:343] host is not running, skipping remaining checks
	I0914 19:08:40.211068   30356 status.go:257] multinode-040952-m02 status: &{Name:multinode-040952-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (112.58s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (95.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-040952 --wait=true -v=8 --alsologtostderr --driver=kvm2 
E0914 19:09:51.847281   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/ingress-addon-legacy-306177/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-040952 --wait=true -v=8 --alsologtostderr --driver=kvm2 : (1m35.422133102s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-040952 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (95.96s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (56.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-040952
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-040952-m02 --driver=kvm2 
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-040952-m02 --driver=kvm2 : exit status 14 (58.166569ms)

                                                
                                                
-- stdout --
	* [multinode-040952-m02] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17217
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17217-7285/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17217-7285/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-040952-m02' is duplicated with machine name 'multinode-040952-m02' in profile 'multinode-040952'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-040952-m03 --driver=kvm2 
E0914 19:11:02.658828   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/addons-417207/client.crt: no such file or directory
E0914 19:11:10.629103   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/functional-281336/client.crt: no such file or directory
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-040952-m03 --driver=kvm2 : (54.847002372s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-040952
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-040952: exit status 80 (218.842621ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-040952
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-040952-m03 already exists in multinode-040952-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-040952-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (56.16s)

                                                
                                    
TestPreload (182.94s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-789877 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4
E0914 19:12:33.674818   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/functional-281336/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-789877 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4: (1m39.196179344s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-789877 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-789877 image pull gcr.io/k8s-minikube/busybox: (1.342033798s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-789877
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-789877: (13.094442009s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-789877 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-789877 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 : (1m8.092311779s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-789877 image list
helpers_test.go:175: Cleaning up "test-preload-789877" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-789877
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-789877: (1.009010473s)
--- PASS: TestPreload (182.94s)

                                                
                                    
TestScheduledStopUnix (123.03s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-643069 --memory=2048 --driver=kvm2 
E0914 19:14:51.846997   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/ingress-addon-legacy-306177/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-643069 --memory=2048 --driver=kvm2 : (51.522956017s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-643069 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-643069 -n scheduled-stop-643069
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-643069 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-643069 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-643069 -n scheduled-stop-643069
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-643069
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-643069 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0914 19:16:02.658200   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/addons-417207/client.crt: no such file or directory
E0914 19:16:10.629587   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/functional-281336/client.crt: no such file or directory
E0914 19:16:14.893443   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/ingress-addon-legacy-306177/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-643069
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-643069: exit status 7 (53.950991ms)

                                                
                                                
-- stdout --
	scheduled-stop-643069
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-643069 -n scheduled-stop-643069
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-643069 -n scheduled-stop-643069: exit status 7 (55.17976ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-643069" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-643069
--- PASS: TestScheduledStopUnix (123.03s)

                                                
                                    
TestSkaffold (139.28s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe4054389096 version
skaffold_test.go:63: skaffold version: v2.7.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-872503 --memory=2600 --driver=kvm2 
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-872503 --memory=2600 --driver=kvm2 : (51.064550005s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe4054389096 run --minikube-profile skaffold-872503 --kube-context skaffold-872503 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe4054389096 run --minikube-profile skaffold-872503 --kube-context skaffold-872503 --status-check=true --port-forward=false --interactive=false: (1m16.196829748s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-7d7d45f4f5-x862t" [0f5e2e5c-0930-402d-8f74-c4bbfee1f7c3] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.016357621s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-5745d948dc-jtxp4" [281c13e3-506d-4c6e-9cad-6c1abb7fbd47] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.011830821s
helpers_test.go:175: Cleaning up "skaffold-872503" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-872503
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-872503: (1.15274967s)
--- PASS: TestSkaffold (139.28s)

                                                
                                    
TestRunningBinaryUpgrade (173.59s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.6.2.728430425.exe start -p running-upgrade-807818 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.6.2.728430425.exe start -p running-upgrade-807818 --memory=2200 --vm-driver=kvm2 : (1m44.060979158s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-807818 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:143: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-807818 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m7.442578323s)
helpers_test.go:175: Cleaning up "running-upgrade-807818" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-807818
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-807818: (1.317578634s)
--- PASS: TestRunningBinaryUpgrade (173.59s)

                                                
                                    
TestKubernetesUpgrade (277.5s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-472226 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-472226 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2 : (2m4.813784417s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-472226
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-472226: (4.620352845s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-472226 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-472226 status --format={{.Host}}: exit status 7 (59.441602ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-472226 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-472226 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=kvm2 : (49.151642693s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-472226 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-472226 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-472226 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 : exit status 106 (76.429357ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-472226] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17217
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17217-7285/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17217-7285/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.1 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-472226
	    minikube start -p kubernetes-upgrade-472226 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4722262 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.1, by running:
	    
	    minikube start -p kubernetes-upgrade-472226 --kubernetes-version=v1.28.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-472226 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-472226 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=kvm2 : (1m37.556728631s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-472226" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-472226
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-472226: (1.166281891s)
--- PASS: TestKubernetesUpgrade (277.50s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.28s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.28s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (189.01s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.6.2.2731438934.exe start -p stopped-upgrade-550152 --memory=2200 --vm-driver=kvm2 
E0914 19:19:51.847539   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/ingress-addon-legacy-306177/client.crt: no such file or directory
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.6.2.2731438934.exe start -p stopped-upgrade-550152 --memory=2200 --vm-driver=kvm2 : (1m39.706418171s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.6.2.2731438934.exe -p stopped-upgrade-550152 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.6.2.2731438934.exe -p stopped-upgrade-550152 stop: (14.086098752s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-550152 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:211: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-550152 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m15.213612416s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (189.01s)

                                                
                                    
TestPause/serial/Start (105.82s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-569740 --memory=2048 --install-addons=false --wait=all --driver=kvm2 
E0914 19:21:02.658508   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/addons-417207/client.crt: no such file or directory
E0914 19:21:10.628428   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/functional-281336/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-569740 --memory=2048 --install-addons=false --wait=all --driver=kvm2 : (1m45.823653049s)
--- PASS: TestPause/serial/Start (105.82s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (52.29s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-569740 --alsologtostderr -v=1 --driver=kvm2 
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-569740 --alsologtostderr -v=1 --driver=kvm2 : (52.252366827s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (52.29s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.05s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-550152
version_upgrade_test.go:219: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-550152: (1.051820596s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.05s)

                                                
                                    
TestPause/serial/Pause (0.63s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-569740 --alsologtostderr -v=5
E0914 19:23:35.031747   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/skaffold-872503/client.crt: no such file or directory
--- PASS: TestPause/serial/Pause (0.63s)

                                                
                                    
TestPause/serial/VerifyStatus (0.26s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-569740 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-569740 --output=json --layout=cluster: exit status 2 (257.628618ms)

                                                
                                                
-- stdout --
	{"Name":"pause-569740","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-569740","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.26s)

                                                
                                    
TestPause/serial/Unpause (0.58s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-569740 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.58s)

                                                
                                    
TestPause/serial/PauseAgain (0.88s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-569740 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.88s)

                                                
                                    
TestPause/serial/DeletePaused (1.04s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-569740 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-569740 --alsologtostderr -v=5: (1.04325022s)
--- PASS: TestPause/serial/DeletePaused (1.04s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.4s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.40s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-674458 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-674458 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 : exit status 14 (66.051163ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-674458] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17217
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17217-7285/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17217-7285/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (63.13s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-674458 --driver=kvm2 
E0914 19:23:40.151935   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/skaffold-872503/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-674458 --driver=kvm2 : (1m2.831740078s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-674458 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (63.13s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (42.67s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-674458 --no-kubernetes --driver=kvm2 
E0914 19:24:51.833562   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/skaffold-872503/client.crt: no such file or directory
E0914 19:24:51.846669   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/ingress-addon-legacy-306177/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-674458 --no-kubernetes --driver=kvm2 : (41.306764234s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-674458 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-674458 status -o json: exit status 2 (258.8066ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-674458","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-674458
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-674458: (1.104278704s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (42.67s)

                                                
                                    
TestNoKubernetes/serial/Start (34.66s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-674458 --no-kubernetes --driver=kvm2 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-674458 --no-kubernetes --driver=kvm2 : (34.655540782s)
--- PASS: TestNoKubernetes/serial/Start (34.66s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (83.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-452578 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-452578 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 : (1m23.920776762s)
--- PASS: TestNetworkPlugins/group/auto/Start (83.92s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-674458 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-674458 "sudo systemctl is-active --quiet service kubelet": exit status 1 (206.141513ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (71.6s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
E0914 19:26:02.658388   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/addons-417207/client.crt: no such file or directory
E0914 19:26:10.628707   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/functional-281336/client.crt: no such file or directory
E0914 19:26:13.754678   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/skaffold-872503/client.crt: no such file or directory
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (1m8.089432567s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (3.51074926s)
--- PASS: TestNoKubernetes/serial/ProfileList (71.60s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (79.72s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-452578 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-452578 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 : (1m19.722565361s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (79.72s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-452578 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (12.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-452578 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-k9dml" [c2d58c57-e624-4090-8e2d-13abc3e35eab] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-k9dml" [c2d58c57-e624-4090-8e2d-13abc3e35eab] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.011382077s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.32s)

                                                
                                    
TestNoKubernetes/serial/Stop (2.14s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-674458
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-674458: (2.144866883s)
--- PASS: TestNoKubernetes/serial/Stop (2.14s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (34.86s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-674458 --driver=kvm2 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-674458 --driver=kvm2 : (34.862754271s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (34.86s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-452578 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-452578 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-452578 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (129.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-452578 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-452578 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 : (2m9.35972554s)
--- PASS: TestNetworkPlugins/group/calico/Start (129.36s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-674458 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-674458 "sudo systemctl is-active --quiet service kubelet": exit status 1 (198.296659ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (130.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-452578 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-452578 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 : (2m10.250686545s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (130.25s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-2ww5q" [aef89714-9a1b-4adb-a655-b1934ebc0ff1] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.023242309s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-452578 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-452578 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-xlp5x" [3d06f596-6494-43ad-968e-0d78bb065720] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-xlp5x" [3d06f596-6494-43ad-968e-0d78bb065720] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.011427415s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.38s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-452578 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-452578 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-452578 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/false/Start (103.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-452578 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-452578 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 : (1m43.539373704s)
--- PASS: TestNetworkPlugins/group/false/Start (103.54s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (124.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-452578 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 
E0914 19:28:57.595517   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/skaffold-872503/client.crt: no such file or directory
E0914 19:29:13.675969   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/functional-281336/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-452578 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 : (2m4.533326533s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (124.53s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-qcgks" [8c8d2c02-5672-48a9-b74c-5cb8a629fad5] Running
E0914 19:29:51.846848   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/ingress-addon-legacy-306177/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.039147317s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.04s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-452578 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (13.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-452578 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-l7tcm" [4648b8c5-22cc-47c2-9bd7-c417aa8c07aa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-l7tcm" [4648b8c5-22cc-47c2-9bd7-c417aa8c07aa] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.011654379s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.46s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-452578 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-452578 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-9pvsf" [cf6f3773-3294-488e-bba9-10637d7f9e9a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-9pvsf" [cf6f3773-3294-488e-bba9-10637d7f9e9a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.027820044s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.43s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-452578 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-452578 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-452578 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-452578 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-452578 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-452578 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-452578 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (13.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-452578 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-9sbqh" [87150585-c080-4453-aee4-5e95763cca6b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-9sbqh" [87150585-c080-4453-aee4-5e95763cca6b] Running
E0914 19:30:34.519977   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/gvisor-436283/client.crt: no such file or directory
E0914 19:30:34.525237   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/gvisor-436283/client.crt: no such file or directory
E0914 19:30:34.535460   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/gvisor-436283/client.crt: no such file or directory
E0914 19:30:34.555687   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/gvisor-436283/client.crt: no such file or directory
E0914 19:30:34.595933   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/gvisor-436283/client.crt: no such file or directory
E0914 19:30:34.676233   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/gvisor-436283/client.crt: no such file or directory
E0914 19:30:34.836538   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/gvisor-436283/client.crt: no such file or directory
E0914 19:30:35.157667   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/gvisor-436283/client.crt: no such file or directory
E0914 19:30:35.798008   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/gvisor-436283/client.crt: no such file or directory
E0914 19:30:37.078908   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/gvisor-436283/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 13.015277568s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (13.54s)
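The NetCatPod steps above deploy testdata/netcat-deployment.yaml and then wait up to 15m0s for pods labelled app=netcat to report Running. A minimal Go sketch of that kind of polling loop, assuming kubectl is on PATH and reusing the false-452578 context from the log (illustrative only, not the actual helpers_test.go code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForRunning polls `kubectl get pods` until at least one pod carrying the
// label reports phase Running, or the timeout expires.
func waitForRunning(kubeContext, label string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "pods", "-l", label,
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil && strings.Contains(string(out), "Running") {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("no %q pod became Running within %s", label, timeout)
}

func main() {
	if err := waitForRunning("false-452578", "app=netcat", 15*time.Minute); err != nil {
		fmt.Println(err)
	}
}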

                                                
                                    
TestNetworkPlugins/group/flannel/Start (83.24s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-452578 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-452578 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 : (1m23.243243602s)
--- PASS: TestNetworkPlugins/group/flannel/Start (83.24s)
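Each Start step shells out to the minikube binary with the flags shown in the Run line, and the Done line reports the elapsed time. A rough Go sketch of issuing the same flannel invocation and timing it (illustrative only, not the net_test.go helper; the binary path and flags are copied from the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	// Same arguments as the flannel/Start run recorded above.
	args := []string{
		"start", "-p", "flannel-452578", "--memory=3072",
		"--alsologtostderr", "--wait=true", "--wait-timeout=15m",
		"--cni=flannel", "--driver=kvm2",
	}
	cmd := exec.Command("out/minikube-linux-amd64", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr

	started := time.Now()
	err := cmd.Run()
	fmt.Printf("start finished in %s (err: %v)\n", time.Since(started), err)
}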

                                                
                                    
TestNetworkPlugins/group/bridge/Start (100.76s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-452578 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-452578 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 : (1m40.76042295s)
--- PASS: TestNetworkPlugins/group/bridge/Start (100.76s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-452578 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-452578 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-452578 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (112.55s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-452578 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 
E0914 19:30:55.000589   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/gvisor-436283/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-452578 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 : (1m52.547063512s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (112.55s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-452578 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.31s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-452578 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-kmm64" [26f7d7e3-dbb7-46e4-aa8c-4087a1d3265e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0914 19:31:02.658306   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/addons-417207/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-kmm64" [26f7d7e3-dbb7-46e4-aa8c-4087a1d3265e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.014102767s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.31s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-452578 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-452578 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-452578 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (359.75s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-537886 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-537886 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0: (5m59.750024996s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (359.75s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-f44tf" [5e0f56ee-4f57-4c29-aafe-937630facdd4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.019086459s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-452578 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (12.35s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-452578 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-gdmsg" [cfdb0af4-917f-42d5-867d-c1404f069d83] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0914 19:31:56.441114   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/gvisor-436283/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-gdmsg" [cfdb0af4-917f-42d5-867d-c1404f069d83] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.013233506s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.35s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-452578 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.27s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-452578 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-452578 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-452578 "pgrep -a kubelet"
E0914 19:32:11.496153   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/auto-452578/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (13.35s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-452578 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-7khv4" [cb0406aa-caca-4dd5-a422-548ec6ed5061] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0914 19:32:14.056628   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/auto-452578/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-7khv4" [cb0406aa-caca-4dd5-a422-548ec6ed5061] Running
E0914 19:32:19.177385   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/auto-452578/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 13.011188375s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (13.35s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (16.09s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-452578 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context bridge-452578 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.198527391s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context bridge-452578 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (16.09s)
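In the bridge/DNS step above, the first in-cluster nslookup timed out and the step passed on a second attempt. A Go sketch of that run-then-retry flow using the same kubectl exec command against the bridge-452578 context (the single-retry policy is an assumption for illustration, not taken from net_test.go):

package main

import (
	"fmt"
	"os/exec"
)

// lookup resolves kubernetes.default from inside the netcat deployment.
func lookup(kubeContext string) ([]byte, error) {
	return exec.Command("kubectl", "--context", kubeContext,
		"exec", "deployment/netcat", "--",
		"nslookup", "kubernetes.default").CombinedOutput()
}

func main() {
	out, err := lookup("bridge-452578")
	if err != nil {
		// The first attempt in the log above timed out; a second attempt succeeded.
		out, err = lookup("bridge-452578")
	}
	fmt.Printf("%s\nerr: %v\n", out, err)
}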

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (92.39s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-720521 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.1
E0914 19:32:29.418196   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/auto-452578/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-720521 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.1: (1m32.385007848s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (92.39s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-452578 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.22s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-452578 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-452578 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.25s)
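The KubeletFlags steps capture the kubelet command line by running `pgrep -a kubelet` over `minikube ssh`; what the test then asserts about those flags is not visible in this log. A small sketch that only captures and prints the process line for the kubenet profile (hypothetical helper, not the test's own):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64",
		"ssh", "-p", "kubenet-452578", "pgrep -a kubelet").CombinedOutput()
	if err != nil {
		fmt.Println("ssh failed:", err)
	}
	fmt.Printf("kubelet process line:\n%s", out)
}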

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (13.37s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-452578 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-cxbbd" [bee8b5f8-6808-402a-aea9-0fc387e40865] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0914 19:32:49.898682   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/auto-452578/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-cxbbd" [bee8b5f8-6808-402a-aea9-0fc387e40865] Running
E0914 19:32:54.894532   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/ingress-addon-legacy-306177/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 13.019467069s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (13.37s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (114.59s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-077703 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-077703 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.1: (1m54.593890535s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (114.59s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-452578 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.28s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-452578 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.23s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.25s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-452578 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.25s)
E0914 19:41:02.658387   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/addons-417207/client.crt: no such file or directory

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (86.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-917984 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.1
E0914 19:33:21.289252   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/kindnet-452578/client.crt: no such file or directory
E0914 19:33:26.410070   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/kindnet-452578/client.crt: no such file or directory
E0914 19:33:29.909707   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/skaffold-872503/client.crt: no such file or directory
E0914 19:33:30.859596   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/auto-452578/client.crt: no such file or directory
E0914 19:33:36.650252   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/kindnet-452578/client.crt: no such file or directory
E0914 19:33:57.130446   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/kindnet-452578/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-917984 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.1: (1m26.093094817s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (86.09s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.59s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-720521 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e565004c-38bb-4ea6-a04a-b452b0a55583] Pending
helpers_test.go:344: "busybox" [e565004c-38bb-4ea6-a04a-b452b0a55583] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e565004c-38bb-4ea6-a04a-b452b0a55583] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.14226762s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-720521 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.59s)
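The DeployApp steps finish by running `ulimit -n` inside the busybox pod. A sketch of capturing that value and parsing it as an integer; the log does not show what the real test asserts about the limit, so nothing is asserted here (context and pod name are copied from the log):

package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "no-preload-720521",
		"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").Output()
	if err != nil {
		fmt.Println("exec failed:", err)
		return
	}
	limit, parseErr := strconv.Atoi(strings.TrimSpace(string(out)))
	fmt.Printf("open-file limit inside the pod: %d (parse err: %v)\n", limit, parseErr)
}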

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.38s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-720521 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-720521 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.295528242s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-720521 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.38s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (13.12s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-720521 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-720521 --alsologtostderr -v=3: (13.123189471s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (13.12s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-720521 -n no-preload-720521
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-720521 -n no-preload-720521: exit status 7 (75.5311ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-720521 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)
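The EnableAddonAfterStop steps tolerate `minikube status` exiting with status 7 because, as the stdout above shows, the host is simply Stopped at that point. A Go sketch of inspecting that exit code instead of treating any non-zero exit as fatal (not the start_stop_delete_test.go code itself; binary path and profile are copied from the log):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", "no-preload-720521", "-n", "no-preload-720521")
	out, err := cmd.Output()          // stdout is still returned on a non-zero exit
	fmt.Printf("host state: %s\n", out) // prints "Stopped" for a stopped VM

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 7 {
		fmt.Println("exit status 7: host reported as stopped, acceptable here")
	} else if err != nil {
		fmt.Printf("unexpected status error: %v\n", err)
	}
}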

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (331.52s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-720521 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.1
E0914 19:34:38.091457   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/kindnet-452578/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-720521 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.1: (5m31.225044259s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-720521 -n no-preload-720521
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (331.52s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.49s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-917984 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [aeb09def-34c0-4251-a73c-8b34ea559df6] Pending
helpers_test.go:344: "busybox" [aeb09def-34c0-4251-a73c-8b34ea559df6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0914 19:34:48.223369   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/calico-452578/client.crt: no such file or directory
E0914 19:34:48.228892   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/calico-452578/client.crt: no such file or directory
E0914 19:34:48.239160   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/calico-452578/client.crt: no such file or directory
E0914 19:34:48.259590   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/calico-452578/client.crt: no such file or directory
E0914 19:34:48.300500   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/calico-452578/client.crt: no such file or directory
E0914 19:34:48.380869   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/calico-452578/client.crt: no such file or directory
E0914 19:34:48.541552   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/calico-452578/client.crt: no such file or directory
E0914 19:34:48.862632   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/calico-452578/client.crt: no such file or directory
helpers_test.go:344: "busybox" [aeb09def-34c0-4251-a73c-8b34ea559df6] Running
E0914 19:34:49.503836   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/calico-452578/client.crt: no such file or directory
E0914 19:34:50.784239   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/calico-452578/client.crt: no such file or directory
E0914 19:34:51.846833   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/ingress-addon-legacy-306177/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.023844806s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-917984 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.49s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.43s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-077703 create -f testdata/busybox.yaml
E0914 19:34:52.780560   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/auto-452578/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f79b81ba-4939-44d6-8b4c-29d4a6dfa2d0] Pending
helpers_test.go:344: "busybox" [f79b81ba-4939-44d6-8b4c-29d4a6dfa2d0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0914 19:34:53.345216   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/calico-452578/client.crt: no such file or directory
helpers_test.go:344: "busybox" [f79b81ba-4939-44d6-8b4c-29d4a6dfa2d0] Running
E0914 19:34:58.465378   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/calico-452578/client.crt: no such file or directory
E0914 19:35:00.046978   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/custom-flannel-452578/client.crt: no such file or directory
E0914 19:35:00.052235   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/custom-flannel-452578/client.crt: no such file or directory
E0914 19:35:00.062474   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/custom-flannel-452578/client.crt: no such file or directory
E0914 19:35:00.083117   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/custom-flannel-452578/client.crt: no such file or directory
E0914 19:35:00.123401   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/custom-flannel-452578/client.crt: no such file or directory
E0914 19:35:00.203714   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/custom-flannel-452578/client.crt: no such file or directory
E0914 19:35:00.364193   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/custom-flannel-452578/client.crt: no such file or directory
E0914 19:35:00.684867   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/custom-flannel-452578/client.crt: no such file or directory
E0914 19:35:01.325400   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/custom-flannel-452578/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.037956964s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-077703 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.43s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-917984 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-917984 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.02s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (13.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-917984 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-917984 --alsologtostderr -v=3: (13.123425376s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-077703 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0914 19:35:02.606124   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/custom-flannel-452578/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-077703 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.030564722s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-077703 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (13.11s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-077703 --alsologtostderr -v=3
E0914 19:35:05.166584   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/custom-flannel-452578/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-077703 --alsologtostderr -v=3: (13.109583336s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-917984 -n default-k8s-diff-port-917984
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-917984 -n default-k8s-diff-port-917984: exit status 7 (61.889135ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-917984 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (337.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-917984 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.1
E0914 19:35:08.706058   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/calico-452578/client.crt: no such file or directory
E0914 19:35:10.287157   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/custom-flannel-452578/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-917984 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.1: (5m36.936518263s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-917984 -n default-k8s-diff-port-917984
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (337.22s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-077703 -n embed-certs-077703
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-077703 -n embed-certs-077703: exit status 7 (64.763812ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-077703 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (349.86s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-077703 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.1
E0914 19:35:20.527439   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/custom-flannel-452578/client.crt: no such file or directory
E0914 19:35:25.231863   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/false-452578/client.crt: no such file or directory
E0914 19:35:25.237104   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/false-452578/client.crt: no such file or directory
E0914 19:35:25.247357   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/false-452578/client.crt: no such file or directory
E0914 19:35:25.267586   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/false-452578/client.crt: no such file or directory
E0914 19:35:25.307831   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/false-452578/client.crt: no such file or directory
E0914 19:35:25.388141   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/false-452578/client.crt: no such file or directory
E0914 19:35:25.548540   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/false-452578/client.crt: no such file or directory
E0914 19:35:25.869415   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/false-452578/client.crt: no such file or directory
E0914 19:35:26.509587   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/false-452578/client.crt: no such file or directory
E0914 19:35:27.790338   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/false-452578/client.crt: no such file or directory
E0914 19:35:29.186737   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/calico-452578/client.crt: no such file or directory
E0914 19:35:30.350666   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/false-452578/client.crt: no such file or directory
E0914 19:35:34.519370   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/gvisor-436283/client.crt: no such file or directory
E0914 19:35:35.471446   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/false-452578/client.crt: no such file or directory
E0914 19:35:41.007614   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/custom-flannel-452578/client.crt: no such file or directory
E0914 19:35:45.712650   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/false-452578/client.crt: no such file or directory
E0914 19:35:55.985446   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/enable-default-cni-452578/client.crt: no such file or directory
E0914 19:35:55.990755   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/enable-default-cni-452578/client.crt: no such file or directory
E0914 19:35:56.001025   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/enable-default-cni-452578/client.crt: no such file or directory
E0914 19:35:56.021820   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/enable-default-cni-452578/client.crt: no such file or directory
E0914 19:35:56.062432   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/enable-default-cni-452578/client.crt: no such file or directory
E0914 19:35:56.143268   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/enable-default-cni-452578/client.crt: no such file or directory
E0914 19:35:56.304217   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/enable-default-cni-452578/client.crt: no such file or directory
E0914 19:35:56.624712   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/enable-default-cni-452578/client.crt: no such file or directory
E0914 19:35:57.265810   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/enable-default-cni-452578/client.crt: no such file or directory
E0914 19:35:58.546776   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/enable-default-cni-452578/client.crt: no such file or directory
E0914 19:36:00.012452   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/kindnet-452578/client.crt: no such file or directory
E0914 19:36:01.107957   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/enable-default-cni-452578/client.crt: no such file or directory
E0914 19:36:02.201861   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/gvisor-436283/client.crt: no such file or directory
E0914 19:36:02.658589   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/addons-417207/client.crt: no such file or directory
E0914 19:36:06.193586   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/false-452578/client.crt: no such file or directory
E0914 19:36:06.228800   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/enable-default-cni-452578/client.crt: no such file or directory
E0914 19:36:10.146953   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/calico-452578/client.crt: no such file or directory
E0914 19:36:10.628495   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/functional-281336/client.crt: no such file or directory
E0914 19:36:16.469387   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/enable-default-cni-452578/client.crt: no such file or directory
E0914 19:36:21.968159   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/custom-flannel-452578/client.crt: no such file or directory
E0914 19:36:36.949830   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/enable-default-cni-452578/client.crt: no such file or directory
E0914 19:36:47.154641   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/false-452578/client.crt: no such file or directory
E0914 19:36:50.279008   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/flannel-452578/client.crt: no such file or directory
E0914 19:36:50.284293   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/flannel-452578/client.crt: no such file or directory
E0914 19:36:50.294531   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/flannel-452578/client.crt: no such file or directory
E0914 19:36:50.314785   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/flannel-452578/client.crt: no such file or directory
E0914 19:36:50.355044   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/flannel-452578/client.crt: no such file or directory
E0914 19:36:50.435349   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/flannel-452578/client.crt: no such file or directory
E0914 19:36:50.595875   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/flannel-452578/client.crt: no such file or directory
E0914 19:36:50.916529   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/flannel-452578/client.crt: no such file or directory
E0914 19:36:51.556939   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/flannel-452578/client.crt: no such file or directory
E0914 19:36:52.837955   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/flannel-452578/client.crt: no such file or directory
E0914 19:36:55.398459   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/flannel-452578/client.crt: no such file or directory
E0914 19:37:00.518855   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/flannel-452578/client.crt: no such file or directory
E0914 19:37:08.936637   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/auto-452578/client.crt: no such file or directory
E0914 19:37:10.759863   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/flannel-452578/client.crt: no such file or directory
E0914 19:37:11.836327   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/bridge-452578/client.crt: no such file or directory
E0914 19:37:11.841635   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/bridge-452578/client.crt: no such file or directory
E0914 19:37:11.851998   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/bridge-452578/client.crt: no such file or directory
E0914 19:37:11.872258   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/bridge-452578/client.crt: no such file or directory
E0914 19:37:11.912576   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/bridge-452578/client.crt: no such file or directory
E0914 19:37:11.992968   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/bridge-452578/client.crt: no such file or directory
E0914 19:37:12.153394   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/bridge-452578/client.crt: no such file or directory
E0914 19:37:12.474014   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/bridge-452578/client.crt: no such file or directory
E0914 19:37:13.114367   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/bridge-452578/client.crt: no such file or directory
E0914 19:37:14.394775   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/bridge-452578/client.crt: no such file or directory
E0914 19:37:16.955573   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/bridge-452578/client.crt: no such file or directory
E0914 19:37:17.910992   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/enable-default-cni-452578/client.crt: no such file or directory
E0914 19:37:22.076298   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/bridge-452578/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-077703 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.1: (5m49.468034438s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-077703 -n embed-certs-077703
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (349.86s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.44s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-537886 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f1206b89-bbc9-4bfc-96c1-231b63d5ce67] Pending
helpers_test.go:344: "busybox" [f1206b89-bbc9-4bfc-96c1-231b63d5ce67] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f1206b89-bbc9-4bfc-96c1-231b63d5ce67] Running
E0914 19:37:31.241046   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/flannel-452578/client.crt: no such file or directory
E0914 19:37:32.067767   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/calico-452578/client.crt: no such file or directory
E0914 19:37:32.317214   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/bridge-452578/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.025764479s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-537886 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.44s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.84s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-537886 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-537886 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.84s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (13.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-537886 --alsologtostderr -v=3
E0914 19:37:36.621382   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/auto-452578/client.crt: no such file or directory
E0914 19:37:43.888817   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/custom-flannel-452578/client.crt: no such file or directory
E0914 19:37:47.629074   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/kubenet-452578/client.crt: no such file or directory
E0914 19:37:47.634380   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/kubenet-452578/client.crt: no such file or directory
E0914 19:37:47.644643   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/kubenet-452578/client.crt: no such file or directory
E0914 19:37:47.664924   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/kubenet-452578/client.crt: no such file or directory
E0914 19:37:47.705068   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/kubenet-452578/client.crt: no such file or directory
E0914 19:37:47.785352   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/kubenet-452578/client.crt: no such file or directory
E0914 19:37:47.946203   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/kubenet-452578/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-537886 --alsologtostderr -v=3: (13.121591474s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.12s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-537886 -n old-k8s-version-537886
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-537886 -n old-k8s-version-537886: exit status 7 (67.243924ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-537886 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0914 19:37:48.266575   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/kubenet-452578/client.crt: no such file or directory
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (451.34s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-537886 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0
E0914 19:37:48.907368   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/kubenet-452578/client.crt: no such file or directory
E0914 19:37:50.188066   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/kubenet-452578/client.crt: no such file or directory
E0914 19:37:52.749240   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/kubenet-452578/client.crt: no such file or directory
E0914 19:37:52.797356   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/bridge-452578/client.crt: no such file or directory
E0914 19:37:57.870058   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/kubenet-452578/client.crt: no such file or directory
E0914 19:38:08.111324   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/kubenet-452578/client.crt: no such file or directory
E0914 19:38:09.075312   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/false-452578/client.crt: no such file or directory
E0914 19:38:12.201236   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/flannel-452578/client.crt: no such file or directory
E0914 19:38:16.165752   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/kindnet-452578/client.crt: no such file or directory
E0914 19:38:28.592129   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/kubenet-452578/client.crt: no such file or directory
E0914 19:38:29.910666   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/skaffold-872503/client.crt: no such file or directory
E0914 19:38:33.757587   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/bridge-452578/client.crt: no such file or directory
E0914 19:38:39.832036   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/enable-default-cni-452578/client.crt: no such file or directory
E0914 19:38:43.853360   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/kindnet-452578/client.crt: no such file or directory
E0914 19:39:09.552620   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/kubenet-452578/client.crt: no such file or directory
E0914 19:39:34.121786   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/flannel-452578/client.crt: no such file or directory
E0914 19:39:48.223495   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/calico-452578/client.crt: no such file or directory
E0914 19:39:51.846749   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/ingress-addon-legacy-306177/client.crt: no such file or directory
E0914 19:39:52.956091   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/skaffold-872503/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-537886 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0: (7m31.085280232s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-537886 -n old-k8s-version-537886
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (451.34s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (20.02s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-xjhtg" [d53249d7-0342-4e4d-9272-aa3770bd9695] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0914 19:39:55.677891   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/bridge-452578/client.crt: no such file or directory
E0914 19:40:00.047834   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/custom-flannel-452578/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-xjhtg" [d53249d7-0342-4e4d-9272-aa3770bd9695] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 20.019330864s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (20.02s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-xjhtg" [d53249d7-0342-4e4d-9272-aa3770bd9695] Running
E0914 19:40:15.907985   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/calico-452578/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01213201s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-720521 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-720521 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.51s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-720521 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-720521 -n no-preload-720521
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-720521 -n no-preload-720521: exit status 2 (245.164837ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-720521 -n no-preload-720521
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-720521 -n no-preload-720521: exit status 2 (255.992308ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-720521 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-720521 -n no-preload-720521
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-720521 -n no-preload-720521
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.51s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (74.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-320215 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.1
E0914 19:40:25.231787   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/false-452578/client.crt: no such file or directory
E0914 19:40:27.729559   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/custom-flannel-452578/client.crt: no such file or directory
E0914 19:40:31.473289   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/kubenet-452578/client.crt: no such file or directory
E0914 19:40:34.520226   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/gvisor-436283/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-320215 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.1: (1m14.242240684s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (74.24s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-h4467" [710de2c7-1dde-423f-bc33-f6f5738d5799] Running
E0914 19:40:45.710005   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/addons-417207/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0203166s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.02s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-h4467" [710de2c7-1dde-423f-bc33-f6f5738d5799] Running
E0914 19:40:52.915792   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/false-452578/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013759761s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-917984 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.33s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-917984 "sudo crictl images -o json"
E0914 19:40:55.985319   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/enable-default-cni-452578/client.crt: no such file or directory
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.33s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (4.05s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-917984 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-917984 -n default-k8s-diff-port-917984
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-917984 -n default-k8s-diff-port-917984: exit status 2 (285.537235ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-917984 -n default-k8s-diff-port-917984
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-917984 -n default-k8s-diff-port-917984: exit status 2 (271.824718ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-917984 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 unpause -p default-k8s-diff-port-917984 --alsologtostderr -v=1: (1.46665608s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-917984 -n default-k8s-diff-port-917984
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-917984 -n default-k8s-diff-port-917984
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.05s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (22.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-qlhbr" [d08ad954-b74b-4dd2-8bae-dce7c463b374] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0914 19:41:10.629111   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/functional-281336/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-qlhbr" [d08ad954-b74b-4dd2-8bae-dce7c463b374] Running
E0914 19:41:23.672618   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/enable-default-cni-452578/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 22.017749473s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (22.02s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-qlhbr" [d08ad954-b74b-4dd2-8bae-dce7c463b374] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013014235s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-077703 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-077703 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.67s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-077703 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-077703 -n embed-certs-077703
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-077703 -n embed-certs-077703: exit status 2 (247.593091ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-077703 -n embed-certs-077703
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-077703 -n embed-certs-077703: exit status 2 (245.024416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-077703 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-077703 -n embed-certs-077703
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-077703 -n embed-certs-077703
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.67s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.02s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-320215 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-320215 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.015515173s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.02s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (13.1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-320215 --alsologtostderr -v=3
E0914 19:41:50.279161   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/flannel-452578/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-320215 --alsologtostderr -v=3: (13.102313089s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (13.10s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-320215 -n newest-cni-320215
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-320215 -n newest-cni-320215: exit status 7 (54.648311ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-320215 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.16s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (47.02s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-320215 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.1
E0914 19:42:08.936650   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/auto-452578/client.crt: no such file or directory
E0914 19:42:11.835607   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/bridge-452578/client.crt: no such file or directory
E0914 19:42:17.961989   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/flannel-452578/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-320215 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.1: (46.656696783s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-320215 -n newest-cni-320215
E0914 19:42:39.518608   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/bridge-452578/client.crt: no such file or directory
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (47.02s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-320215 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-320215 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-320215 -n newest-cni-320215
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-320215 -n newest-cni-320215: exit status 2 (237.087313ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-320215 -n newest-cni-320215
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-320215 -n newest-cni-320215: exit status 2 (237.510649ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-320215 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-320215 -n newest-cni-320215
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-320215 -n newest-cni-320215
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.27s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-mjgjf" [36ef7dbd-6b63-4c16-b6c6-674267679964] Running
E0914 19:45:20.169247   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/no-preload-720521/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.016738157s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-mjgjf" [36ef7dbd-6b63-4c16-b6c6-674267679964] Running
E0914 19:45:25.231533   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/false-452578/client.crt: no such file or directory
E0914 19:45:26.880486   14506 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17217-7285/.minikube/profiles/default-k8s-diff-port-917984/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010288165s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-537886 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-537886 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-537886 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-537886 -n old-k8s-version-537886
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-537886 -n old-k8s-version-537886: exit status 2 (230.979592ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-537886 -n old-k8s-version-537886
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-537886 -n old-k8s-version-537886: exit status 2 (234.394529ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-537886 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-537886 -n old-k8s-version-537886
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-537886 -n old-k8s-version-537886
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.28s)

                                                
                                    

Test skip (31/317)

TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.1/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.1/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.1/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:210: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (3.1s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-452578 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-452578

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-452578

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-452578

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-452578

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-452578

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-452578

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-452578

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-452578

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-452578

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-452578

>>> host: /etc/nsswitch.conf:
* Profile "cilium-452578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452578"

>>> host: /etc/hosts:
* Profile "cilium-452578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452578"

>>> host: /etc/resolv.conf:
* Profile "cilium-452578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452578"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-452578

>>> host: crictl pods:
* Profile "cilium-452578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452578"

>>> host: crictl containers:
* Profile "cilium-452578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452578"

>>> k8s: describe netcat deployment:
error: context "cilium-452578" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-452578" does not exist

>>> k8s: netcat logs:
error: context "cilium-452578" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-452578" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-452578" does not exist

>>> k8s: coredns logs:
error: context "cilium-452578" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-452578" does not exist

>>> k8s: api server logs:
error: context "cilium-452578" does not exist

>>> host: /etc/cni:
* Profile "cilium-452578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452578"

>>> host: ip a s:
* Profile "cilium-452578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452578"

>>> host: ip r s:
* Profile "cilium-452578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452578"

>>> host: iptables-save:
* Profile "cilium-452578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452578"

>>> host: iptables table nat:
* Profile "cilium-452578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452578"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-452578

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-452578

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-452578" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-452578" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-452578

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-452578

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-452578" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-452578" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-452578" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-452578" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-452578" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-452578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452578"

>>> host: kubelet daemon config:
* Profile "cilium-452578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452578"

>>> k8s: kubelet logs:
* Profile "cilium-452578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452578"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-452578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452578"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-452578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452578"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-452578

>>> host: docker daemon status:
* Profile "cilium-452578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452578"

>>> host: docker daemon config:
* Profile "cilium-452578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452578"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-452578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452578"

>>> host: docker system info:
* Profile "cilium-452578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452578"

>>> host: cri-docker daemon status:
* Profile "cilium-452578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452578"

>>> host: cri-docker daemon config:
* Profile "cilium-452578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452578"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-452578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452578"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-452578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452578"

>>> host: cri-dockerd version:
* Profile "cilium-452578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452578"

>>> host: containerd daemon status:
* Profile "cilium-452578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452578"

>>> host: containerd daemon config:
* Profile "cilium-452578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452578"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-452578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452578"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-452578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452578"

>>> host: containerd config dump:
* Profile "cilium-452578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452578"

>>> host: crio daemon status:
* Profile "cilium-452578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452578"

>>> host: crio daemon config:
* Profile "cilium-452578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452578"

>>> host: /etc/crio:
* Profile "cilium-452578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452578"

>>> host: crio config:
* Profile "cilium-452578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452578"

----------------------- debugLogs end: cilium-452578 [took: 2.952788276s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-452578" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-452578
--- SKIP: TestNetworkPlugins/group/cilium (3.10s)

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-198653" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-198653
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)